_id: string (length 40)
text: string (length 0 to 10k)
98aa1650071dd259f45742dae4b97ef13a9de08a
2105d6e014290cd0fd093479cc32cece51477a5a
Identifying incomplete or partial fingerprints from a large fingerprint database remains a difficult challenge today. Existing studies on partial fingerprints focus on one-to-one matching using local ridge details. In this paper, we investigate the problem of retrieving candidate lists for matching partial fingerprints by exploiting global topological features. Specifically, we propose an analytical approach for reconstructing the global topology representation from a partial fingerprint. First, we present an inverse orientation model for describing the reconstruction problem. Then, we provide a general expression for all valid solutions to the inverse model. This allows us to preserve data fidelity in the existing segments while exploring missing structures in the unknown parts. We have further developed algorithms for estimating the missing orientation structures based on some a priori knowledge of ridge topology features. Our statistical experiments show that our proposed model-based approach can effectively reduce the number of candidates for pairwise fingerprint matching, and thus significantly improve the system retrieval performance for partial fingerprint identification.
7b71acff127c9bc736185343221f05aac4768ac0
Recovery of low-rank matrices has recently seen significant activity in many areas of science and engineering, motivated by recent theoretical results for exact reconstruction guarantees and interesting practical applications. In this paper, we present novel recovery algorithms for estimating low-rank matrices in matrix completion and robust principal component analysis based on sparse Bayesian learning (SBL) principles. Starting from a matrix factorization formulation and enforcing the low-rank constraint in the estimates as a sparsity constraint, we develop an approach that is very effective in determining the correct rank while providing high recovery performance. We provide connections with existing methods in other similar problems and empirical results and comparisons with current state-of-the-art methods that illustrate the effectiveness of this approach.
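As a hedged illustration of how a low-rank constraint can be enforced as a sparsity constraint in a factorized, SBL-style model (the notation and hyperprior details here are generic, not necessarily this paper's exact formulation): write the estimate as a product and tie each column pair to a shared hyperparameter, so that pruning hyperparameters prunes rank.
\[
X = A B^{\top}, \qquad a_i \sim \mathcal{N}(0, \gamma_i I), \quad b_i \sim \mathcal{N}(0, \gamma_i I), \quad i = 1, \dots, k,
\]
where \(a_i\) and \(b_i\) are the \(i\)-th columns of \(A\) and \(B\). Driving a hyperparameter \(\gamma_i\) to zero removes the corresponding rank-one component \(a_i b_i^{\top}\), so estimating the \(\gamma_i\) (for example by evidence maximization) performs automatic rank determination.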
93ed6511a0ae5b13ccf445081ab829d415ca47df
Label propagation is a powerful and flexible semi-supervised learning technique on graphs. Neural networks, on the other hand, have proven track records in many supervised learning tasks. In this work, we propose a training framework with a graph-regularised objective, namely Neural Graph Machines, that can combine the power of neural networks and label propagation. This work generalises previous literature on graph-augmented training of neural networks, enabling it to be applied to multiple neural architectures (Feed-forward NNs, CNNs and LSTM RNNs) and a wide range of graphs. The new objective allows the neural networks to harness both labeled and unlabeled data by: (a) allowing the network to train using labeled data as in the supervised setting, (b) biasing the network to learn similar hidden representations for neighboring nodes on a graph, in the same vein as label propagation. Such architectures with the proposed objective can be trained efficiently using stochastic gradient descent and scaled to large graphs, with a runtime that is linear in the number of edges. The proposed joint training approach convincingly outperforms many existing methods on a wide range of tasks (multi-label classification on social graphs, news categorization, document classification and semantic intent classification), with multiple forms of graph inputs (including graphs with and without node-level features) and using different types of neural networks.
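As a hedged sketch of what a graph-regularised objective of this kind looks like (the paper's actual objective treats labelled-labelled, labelled-unlabelled and unlabelled-unlabelled edges with separate weights; this collapses them into a single term with weight \(\alpha\)):
\[
\mathcal{C}(\theta) \;=\; \sum_{n \in \mathcal{L}} c\big(g_{\theta}(x_n), y_n\big) \;+\; \alpha \sum_{(u,v) \in \mathcal{E}} w_{uv}\, d\big(h_{\theta}(x_u), h_{\theta}(x_v)\big),
\]
where \(c\) is a supervised loss over the labelled set \(\mathcal{L}\), \(h_{\theta}\) is a hidden representation of the network, \(d\) is a distance such as the squared Euclidean distance, and \(w_{uv}\) are edge weights. The first term is ordinary supervised training; the second biases neighbouring nodes towards similar representations, as in label propagation.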
a32e74d41a066d3dad15b020cce36cc1e3170e49
Graphical models bring together graph theory and probability theory in a powerful formalism for multivariate statistical modeling. In statistical signal processing—as well as in related fields such as communication theory, control theory and bioinformatics—statistical models have long been formulated in terms of graphs, and algorithms for computing basic statistical quantities such as likelihoods and marginal probabilities have often been expressed in terms of recursions operating on these graphs. Examples include hidden Markov models, Markov random fields, the forward-backward algorithm and Kalman filtering [Rabiner and Juang (1993); Pearl (1988); Kailath et al. (2000)]. These ideas can be understood, unified and generalized within the formalism of graphical models. Indeed, graphical models provide a natural framework for formulating variations on these classical architectures, and for exploring entirely new families of statistical models. The recursive algorithms cited above are all instances of a general recursive algorithm known as the junction tree algorithm [Lauritzen and Spiegelhalter, 1988]. The junction tree algorithm takes advantage of factorization properties of the joint probability distribution that are encoded by the pattern of missing edges in a graphical model. For suitably sparse graphs, the junction tree algorithm provides a systematic and practical solution to the general problem of computing likelihoods and other statistical quantities associated with a graphical model. Unfortunately, many graphical models of practical interest are not "suitably sparse," so that the junction tree algorithm no longer provides a viable computational solution to the problem of computing marginal probabilities and other expectations. One popular source of methods for attempting to cope with such cases is the Markov chain Monte Carlo (MCMC) framework, and indeed there is a significant literature on
1d16975402e5a35c7e33b9a97fa85c689f840ded
In this paper, we present LSHDB, the first parallel and distributed engine for record linkage and similarity search. LSHDB materializes an abstraction layer to hide the mechanics of Locality-Sensitive Hashing (a popular method for detecting similar items in high dimensions), which is used as the underlying similarity search engine. LSHDB creates the appropriate data structures from the input data and persists these structures on disk using a noSQL engine. It inherently supports the parallel processing of distributed queries, is highly extensible, and is easy to use. We will demonstrate LSHDB both as the underlying system for detecting similar records in the context of Record Linkage (and of Privacy-Preserving Record Linkage) tasks, as well as a search engine for identifying string values that are similar to submitted queries.
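LSHDB's internals are not shown in the abstract; as a hedged illustration of the locality-sensitive hashing idea it builds on, here is a minimal random-hyperplane (cosine) LSH sketch in Python. The class name, parameters and bucketing scheme are illustrative choices, not LSHDB's API. Bucket keys group vectors whose bit signatures agree, so a query only touches a few buckets instead of scanning the whole collection.

```python
import numpy as np
from collections import defaultdict

class CosineLSH:
    """Minimal random-hyperplane LSH: similar vectors tend to share bucket keys."""
    def __init__(self, dim, n_bits=16, n_tables=4, seed=0):
        rng = np.random.default_rng(seed)
        # One set of random hyperplanes per table.
        self.planes = [rng.standard_normal((n_bits, dim)) for _ in range(n_tables)]
        self.tables = [defaultdict(list) for _ in range(n_tables)]

    def _keys(self, v):
        # Signature = sign pattern of the projections onto the hyperplanes.
        return [tuple((p @ v) > 0) for p in self.planes]

    def insert(self, item_id, v):
        for table, key in zip(self.tables, self._keys(v)):
            table[key].append(item_id)

    def query(self, v):
        # Union of bucket contents across tables = candidate set for verification.
        candidates = set()
        for table, key in zip(self.tables, self._keys(v)):
            candidates.update(table[key])
        return candidates

# Usage: index random vectors and query with a noisy copy of one of them.
rng = np.random.default_rng(1)
data = rng.standard_normal((100, 64))
lsh = CosineLSH(dim=64)
for i, v in enumerate(data):
    lsh.insert(i, v)
print(lsh.query(data[7] + 0.01 * rng.standard_normal(64)))  # should contain 7
```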
6f9ead16c5464a989dee4cc337d473e353ce54c7
We present a real-time gesture classification system for skeletal wireframe motion. Its key components include an angular representation of the skeleton designed for recognition robustness under noisy input, a cascaded correlation-based classifier for multivariate time-series data, and a distance metric based on dynamic time-warping to evaluate the difference in motion between an acquired gesture and an oracle for the matching gesture. While the first and last tools are generic in nature and could be applied to any gesture-matching scenario, the classifier is conceived based on the assumption that the input motion adheres to a known, canonical time-base: a musical beat. On a benchmark comprising 28 gesture classes, hundreds of gesture instances recorded using the XBOX Kinect platform and performed by dozens of subjects for each gesture class, our classifier has an average accuracy of 96.9%, for approximately 4-second skeletal motion recordings. This accuracy is remarkable given the input noise from the real-time depth sensor.
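The abstract mentions a dynamic time-warping distance between an acquired gesture and an oracle. As a hedged sketch (not the authors' implementation), a basic DTW over multivariate time series looks like this in Python; the toy joint-angle trajectories at the bottom are invented for the usage example.

```python
import numpy as np

def dtw_distance(a, b):
    """Basic dynamic time warping between two sequences of feature vectors.

    a: (n, d) array, b: (m, d) array. Returns the accumulated alignment cost.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Example: two time-warped versions of the same 2-D joint-angle trajectory.
t = np.linspace(0, 1, 50)
gesture = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)
oracle = np.stack([np.sin(2 * np.pi * t ** 1.2), np.cos(2 * np.pi * t ** 1.2)], axis=1)
print(dtw_distance(gesture, oracle))
```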
8abbc8a8bdb2c1d7193ecb2a49cb8f9344ce4141
We present an analysis of measuring stride-to-stride gait variability passively, in a home setting using two vision based monitoring techniques: anonymized video data from a system of two web-cameras, and depth imagery from a single Microsoft Kinect. Millions of older adults fall every year. The ability to assess the fall risk of elderly individuals is essential to allowing them to continue living safely in independent settings as they age. Studies have shown that measures of stride-to-stride gait variability are predictive of falls in older adults. For this analysis, a set of participants were asked to perform a number of short walks while being monitored by the two vision based systems, along with a marker based Vicon motion capture system for ground truth. Measures of stride-to-stride gait variability were computed using each of the systems and compared against those obtained from the Vicon.
386a8419dfb6a52e522cdab70ee8449422e529ba
PRIMARY OBJECTIVE The study aim is to explore the perceptions and expectations of seniors in regard to "smart home" technology installed and operated in their homes with the purpose of improving their quality of life and/or monitoring their health status. RESEARCH DESIGN AND METHODS Three focus group sessions were conducted within this pilot study to assess older adults' perceptions of the technology and ways they believe technology can improve their daily lives. Themes discussed in these groups included participants' perceptions of the usefulness of devices and sensors in health-related issues such as preventing or detecting falls, assisting with visual or hearing impairments, improving mobility, reducing isolation, managing medications, and monitoring of physiological parameters. The audiotapes were transcribed and a content analysis was performed. RESULTS A total of 15 older adults participated in three focus group sessions. Areas where advanced technologies would benefit older adult residents included emergency help, prevention and detection of falls, monitoring of physiological parameters, etc. Concerns were expressed about the user-friendliness of the devices, lack of human response and the need for training tailored to older learners. CONCLUSIONS All participants had an overall positive attitude towards devices and sensors that can be installed in their homes in order to enhance their lives.
3b8a4cc6bb32b50b29943ceb7248f318e589cd79
We present an efficient indexing method to locate 1-dimensional subsequences within a collection of sequences, such that the subsequences match a given (query) pattern within a specified tolerance. The idea is to map each data sequence into a small set of multidimensional rectangles in feature space. Then, these rectangles can be readily indexed using traditional spatial access methods, like the R*-tree [9]. In more detail, we use a sliding window over the data sequence and extract its features; the result is a trail in feature space. We propose an efficient and effective algorithm to divide such trails into sub-trails, which are subsequently represented by their Minimum Bounding Rectangles (MBRs). We also examine queries of varying lengths, and we show how to handle each case efficiently. We implemented our method and carried out experiments on synthetic and real data (stock price movements). We compared the method to sequential scanning, which is the only obvious competitor. The results were excellent: our method accelerated the search by a factor of 3 up to 100.
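As a hedged sketch of the indexing pipeline described above (the feature map and the fixed-size sub-trail splitting are simplified placeholders, not the paper's exact algorithm): slide a window over each sequence, map every window to a low-dimensional feature point, cut the resulting trail into sub-trails, and keep one minimum bounding rectangle per sub-trail, ready to be bulk-loaded into a spatial index such as an R*-tree.

```python
import numpy as np

def window_features(window, n_coeffs=2):
    # Placeholder feature map: magnitudes of the first few DFT coefficients.
    spectrum = np.fft.rfft(window)
    return np.abs(spectrum[:n_coeffs])

def sequence_to_mbrs(seq, win=16, subtrail_len=8):
    """Map a 1-D sequence to a list of MBRs (lo, hi) in feature space."""
    trail = np.array([window_features(seq[i:i + win])
                      for i in range(len(seq) - win + 1)])
    mbrs = []
    for start in range(0, len(trail), subtrail_len):
        sub = trail[start:start + subtrail_len]
        mbrs.append((sub.min(axis=0), sub.max(axis=0)))  # one MBR per sub-trail
    return mbrs

# Usage on a synthetic random-walk "stock price" series.
rng = np.random.default_rng(0)
series = np.cumsum(rng.standard_normal(200))
for lo, hi in sequence_to_mbrs(series)[:3]:
    print(lo, hi)
```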
886431a362bfdbcc6dd518f844eb374950b9de86
A new view-based approach to the representation and recognition of human movement is presented. The basis of the representation is a temporal template: a static vector-image where the vector value at each point is a function of the motion properties at the corresponding spatial location in an image sequence. Using aerobics exercises as a test domain, we explore the representational power of a simple, two component version of the templates: The first value is a binary value indicating the presence of motion and the second value is a function of the recency of motion in a sequence. We then develop a recognition method matching temporal templates against stored instances of views of known actions. The method automatically performs temporal segmentation, is invariant to linear changes in speed, and runs in real-time on standard platforms. Index Terms: Motion recognition, computer vision.
846a1a0136e69554923301ea445372a57c6afd9d
The key challenge in multiagent learning is learning a best response to the behaviour of other agents, which may be non-stationary: if the other agents adapt their strategy as well, the learning target moves. Disparate streams of research have approached non-stationarity from several angles, making a variety of implicit assumptions that make it hard to keep an overview of the state of the art and to validate the innovation and significance of new works. This survey presents a coherent overview of work that addresses opponent-induced non-stationarity with tools from game theory, reinforcement learning and multi-armed bandits. Further, we reflect on the principal approaches by which algorithms model and cope with this non-stationarity, arriving at a new framework and five categories (in increasing order of sophistication): ignore, forget, respond to target models, learn models, and theory of mind. A wide range of state-of-the-art algorithms is classified into a taxonomy, using these categories and key characteristics of the environment (e.g., observability) and adaptation behaviour of the opponents (e.g., smooth, abrupt). To clarify even further we present illustrative variations of one domain, contrasting the strengths and limitations of each category. Finally, we discuss in which environments the different approaches yield most merit, and point to promising avenues of future research.
a3ee76b20df36976aceb16e9df93817255e26fd4
Research on question answering with knowledge bases has recently seen an increasing use of deep architectures. In this extended abstract, we study the application of the neural machine translation paradigm for question parsing. We employ a sequence-to-sequence model to learn graph patterns in the SPARQL graph query language and their compositions. Instead of inducing the programs through question-answer pairs, we adopt a semi-supervised approach, where alignments between questions and queries are built through templates. We argue that the coverage of language utterances can be expanded using recent notable works in natural language generation.
4928aee4b9a558d8faaa6126201a45b7aaea7bb6
After more than a decade of comprehensive research work in the area of electronic government (e-government), no attempt has yet been made to undertake a systematic literature review on the costs, opportunities, benefits and risks that influence the implementation of e-government. This is particularly significant given the various related challenges that governments have faced over the years when implementing e-government initiatives. Hence, the aim of this paper is to undertake a comprehensive analysis of relevant literature addressing these issues using a systematic review of 132 studies identified from the Scopus online database and Google Scholar together with a manual review of relevant papers from journals dedicated to electronic government research such as Electronic Government, an International Journal (EGIJ), International Journal of Electronic Government Research (IJEGR) and Transforming Government: People, Process, and Policy (TGPPP). The overall review indicated that although a large number of papers discuss costs, opportunities, benefits and risks, treatment of these issues has tended to be superficial. Moreover, there is a lack of empirical studies which can statistically evaluate the performance of these constructs in relation to the various e-government systems. Therefore, this research would help governments to better analyse the impact of costs, opportunities, benefits and risks on the success of e-government systems and its pre-adoption from an implementation perspective.
d990e96bff845b3c4005e20629c613bf6e2c5c40
2169acce9014fd4ce462da494b71a3d2ef1c8191
While it has not yet been proven, empirical evidence suggests that model generalization is related to local properties of the optima which can be described via the Hessian. We connect model generalization with the local property of a solution under the PAC-Bayes paradigm. In particular, we prove that model generalization ability is related to the Hessian, the higher-order “smoothness” terms characterized by the Lipschitz constant of the Hessian, and the scales of the parameters. Guided by the proof, we propose a metric to score the generalization capability of the model, as well as an algorithm that optimizes the perturbed model accordingly.
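For context, one common form of the PAC-Bayes bound that underlies this style of argument (this is a standard textbook form, not the paper's specific theorem): for a prior \(\pi\) over models fixed before seeing the \(n\) training samples, a bounded loss in \([0,1]\), and any posterior \(\rho\), with probability at least \(1-\delta\),
\[
\mathbb{E}_{\rho}\big[L(h)\big] \;\le\; \mathbb{E}_{\rho}\big[\hat{L}(h)\big] \;+\; \sqrt{\frac{\mathrm{KL}(\rho\,\|\,\pi) + \ln\frac{2\sqrt{n}}{\delta}}{2n}} .
\]
Choosing \(\rho\) as a Gaussian perturbation around the trained weights is what connects such bounds to local curvature (the Hessian) and to the scales of the parameters.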
39ed372026adaf052d9c40613386da296ee552dc
DeepMatching (DM) is one of the state-of-the-art matching algorithms for computing quasi-dense correspondences between images. Recent optical flow methods use DeepMatching to find initial image correspondences and achieve outstanding performance. However, the key building block of DeepMatching, the correlation map computation, is time-consuming. In this paper, we propose a new algorithm, LSHDM, which addresses the problem by applying Locality Sensitive Hashing (LSH) to DeepMatching. The computational complexity is greatly reduced for the correlation map computation step. Experiments show that image matching can be accelerated by our approach by a factor of ten or more compared to DeepMatching, while retaining comparable accuracy for optical flow estimation.
2fb8c7faf5e42dba3993a2b7cd8c6fd1b90d29ef
Literacy learning, learning how to read and write, begins long before children enter school. One of the key skills for reading and writing is the ability to represent thoughts symbolically and share them in language with an audience who may not necessarily share the same temporal and spatial context for the story. Children learn and practice these important language skills every day, telling stories with the peers and adults around them. In particular, storytelling in the context of peer collaboration provides a key environment for children to learn language skills important for literacy. In light of this, we designed Sam, an embodied conversational agent who tells stories collaboratively with children. Sam was designed to look like a peer for preschool children, but to tell stories in a developmentally advanced way: modeling narrative skills important for literacy. Results demonstrated that children who played with the virtual peer told stories that more closely resembled the virtual peer’s linguistically advanced stories: using more quoted speech and temporal and spatial expressions. In addition, children listened to Sam's stories carefully, assisting her and suggesting improvements. The potential benefits of having technology play a social role in young children’s literacy learning are discussed.
4623accb0524d3b000866709ec27f1692cc9b15a
f9791399e87bba3f911fd8f570443cf721cf7b1e
Capturing the compositional process which maps the meaning of words to that of documents is a central challenge for researchers in Natural Language Processing and Information Retrieval. We introduce a model that is able to represent the meaning of documents by embedding them in a low dimensional vector space, while preserving distinctions of word and sentence order crucial for capturing nuanced semantics. Our model is based on an extended Dynamic Convolutional Neural Network, which learns convolution filters at both the sentence and document level, hierarchically learning to capture and compose low level lexical features into high level semantic concepts. We demonstrate the effectiveness of this model on a range of document modelling tasks, achieving strong results with no feature engineering and with a more compact model. Inspired by recent advances in visualising deep convolutional networks for computer vision, we present a novel visualisation technique for our document networks which not only provides insight into their learning process, but also can be interpreted to produce a compelling automatic summarisation system for texts.
6f2cdce2eb8e6afdfd9e81316ff08f80e972cc47
We report on a four year academic research project to build a natural language processing platform in support of a large media company. The Computable News platform processes news stories, producing a layer of structured data that can be used to build rich applications. We describe the underlying platform and the research tasks that we explored building it. The platform supports a wide range of prototype applications designed to support different newsroom functions. We hope that this qualitative review provides some insight into the challenges involved in this type of project.
6ab5acb5f32ef2d28f91109d40e5e859a9c101bf
We introduce the challenge problem for generic video indexing to gain insight into intermediate steps that affect performance of multimedia analysis methods, while at the same time fostering repeatability of experiments. To arrive at a challenge problem, we provide a general scheme for the systematic examination of automated concept detection methods, by decomposing the generic video indexing problem into 2 unimodal analysis experiments, 2 multimodal analysis experiments, and 1 combined analysis experiment. For each experiment, we evaluate generic video indexing performance on 85 hours of international broadcast news data, from the TRECVID 2005/2006 benchmark, using a lexicon of 101 semantic concepts. By establishing a minimum performance on each experiment, the challenge problem allows for component-based optimization of the generic indexing issue, while simultaneously offering other researchers a reference for comparison during indexing methodology development. To stimulate further investigations in intermediate analysis steps that influence video indexing performance, the challenge offers to the research community a manually annotated concept lexicon, pre-computed low-level multimedia features, trained classifier models, and five experiments together with baseline performance, which are all available at http://www.mediamill.nl/challenge/.
dc9681dbb3c9cc83b4636ec97680aa3326a7e7d0
Most existing video denoising algorithms assume a single statistical model of image noise, e.g. additive Gaussian white noise, which often is violated in practice. In this paper, we present a new patch-based video denoising algorithm capable of removing serious mixed noise from the video data. By grouping similar patches in both spatial and temporal domain, we formulate the problem of removing mixed noise as a low-rank matrix completion problem, which leads to a denoising scheme without strong assumptions on the statistical properties of noise. The resulting nuclear norm related minimization problem can be efficiently solved by many recently developed methods. The robustness and effectiveness of our proposed denoising algorithm on removing mixed noise, e.g. heavy Gaussian noise mixed with impulsive noise, is validated in the experiments and our proposed approach compares favorably against some existing video denoising algorithms.
005aea80a403da18f95fcb9944236a976d83580e
This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes, f(t) = Σ_{τ∈T} f(τ) δ(t − τ), obeying |T| ≤ C_M · (log N)^{−1} · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^{−M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| · log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^{−M}) would in general require a number of frequency samples at least proportional to |T| · log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.
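In its standard form, the convex program referred to here minimizes the ℓ1 norm over all signals consistent with the observed Fourier coefficients:
\[
\min_{g \in \mathbb{C}^N} \; \|g\|_{\ell_1} = \sum_{t=0}^{N-1} |g(t)| \quad \text{subject to} \quad \hat{g}(\omega) = \hat{f}(\omega) \;\; \text{for all } \omega \in \Omega .
\]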
023f6fc69fe1f6498e35dbf85932ecb549d36ca4
This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices {X^k, Y^k}, and at each step mainly performs a soft-thresholding operation on the singular values of the matrix Y^k. There are two remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates {X^k} is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence of iterates converges. On the practical side, we provide numerical examples in which 1,000 × 1,000 matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with the recent literature on linearized Bregman iterations for ℓ1 minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.
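A minimal NumPy sketch of the singular value thresholding iteration described above (step sizes, threshold and stopping rule below are illustrative choices, not the paper's tuned recommendations):

```python
import numpy as np

def svt_complete(M_obs, mask, tau=5.0, delta=1.2, n_iters=300):
    """Singular value thresholding for matrix completion.

    M_obs: observed matrix with zeros at unobserved entries.
    mask:  boolean array, True where entries are observed.
    """
    Y = np.zeros_like(M_obs)
    for _ in range(n_iters):
        # Shrinkage step: soft-threshold the singular values of Y.
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s_shrunk = np.maximum(s - tau, 0.0)
        X = (U * s_shrunk) @ Vt          # low-rank iterate
        # Dual ascent step on the observed entries only.
        Y = Y + delta * mask * (M_obs - X)
    return X

# Example: approximately recover a random rank-2 matrix from half of its entries.
rng = np.random.default_rng(0)
M = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
mask = rng.random(M.shape) < 0.5
X_hat = svt_complete(M * mask, mask, tau=5 * np.sqrt(M.size))
print(np.linalg.norm(X_hat - M) / np.linalg.norm(M))  # relative recovery error
```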
e63ade93d75bc8f34639e16e2b15dc018ec9c208
8f84fd69ea302f28136a756b433ad9a4711571c2
8cbef23c9ee2ae7c35cc691a0c1d713a6377c9f2
8645a7ff78dc321e08dea6576c04f02a3ce158f9
Videos serve to convey complex semantic information and ease the understanding of new knowledge. However, when mixed semantic meanings from different modalities (i.e., image, video, text) are involved, it is more difficult for a computer model to detect and classify the concepts (such as flood, storm, and animals). This paper presents a multimodal deep learning framework to improve video concept classification by leveraging recent advances in transfer learning and sequential deep learning models. Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNN) models are then used to obtain the sequential semantics for both audio and textual models. The proposed framework is applied to a disaster-related video dataset that includes not only disaster scenes, but also the activities that took place during the disaster event. The experimental results show the effectiveness of the proposed framework.
119bb251cff0292cbf6bed27acdcad424ed9f9d0
This paper presents a tutorial introduction to the use of variational methods for inference and learning in graphical models (Bayesian networks and Markov random fields). We present a number of examples of graphical models, including the QMR-DT database, the sigmoid belief network, the Boltzmann machine, and several variants of hidden Markov models, in which it is infeasible to run exact inference algorithms. We then introduce variational methods, which exploit laws of large numbers to transform the original graphical model into a simplified graphical model in which inference is efficient. Inference in the simplified model provides bounds on probabilities of interest in the original model. We describe a general framework for generating variational transformations based on convex duality. Finally we return to the examples and demonstrate how variational algorithms can be formulated in each case.
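For orientation, the basic inequality behind such variational transformations is the Jensen bound obtained from any tractable distribution q over the hidden variables z (the convex-duality machinery in the paper generalises this idea):
\[
\log p(x) \;=\; \log \sum_{z} p(x, z) \;=\; \log \sum_{z} q(z)\,\frac{p(x, z)}{q(z)} \;\ge\; \sum_{z} q(z) \log \frac{p(x, z)}{q(z)},
\]
with equality when \(q(z) = p(z \mid x)\); maximizing the right-hand side over a tractable family of q yields the bounds on probabilities of interest mentioned above.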
17e9d3ba861db8a6d323e1410fe5ca0986d5ad6a
A non-parametric method for texture synthesis is proposed. The texture synthesis process grows a new image outward from an initial seed, one pixel at a time. A Markov random field model is assumed, and the conditional distribution of a pixel given all its neighbors synthesized so far is estimated by querying the sample image and finding all similar neighborhoods. The degree of randomness is controlled by a single perceptually intuitive parameter. The method aims at preserving as much local structure as possible and produces good results for a wide variety of synthetic and real-world textures.
54205667c1f65a320f667d73c354ed8e86f1b9d9
A constrained optimization type of numerical algorithm for removing noise from images is presented. The total variation of the image is minimized subject to constraints involving the statistics of the noise. The constraints are imposed using Lagrange multipliers. The solution is obtained using the gradient-projection method. This amounts to solving a time dependent partial differential equation on a manifold determined by the constraints. As t → ∞ the solution converges to a steady state which is the denoised image. The numerical algorithm is simple and relatively fast. The results appear to be state-of-the-art for very noisy images. The method is noninvasive, yielding sharp edges in the image. The technique could be interpreted as a first step of moving each level set of the image normal to itself with velocity equal to the curvature of the level set divided by the magnitude of the gradient of the image, and a second step which projects the image back onto the constraint set.
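In symbols, the constrained minimization described above and the gradient-projection evolution solved to steady state can be written as follows (boundary conditions and the exact normalization of the noise variance are simplified here):
\[
\min_{u} \int_{\Omega} |\nabla u|\, dx \quad \text{subject to} \quad \int_{\Omega} (u - u_0)\, dx = 0, \qquad \int_{\Omega} (u - u_0)^2\, dx = \sigma^2,
\]
\[
\frac{\partial u}{\partial t} \;=\; \nabla \cdot\!\left(\frac{\nabla u}{|\nabla u|}\right) \;-\; \lambda\,(u - u_0), \qquad u(x,0) = u_0(x),
\]
where \(u_0\) is the noisy image, \(\sigma^2\) the known noise variance, and \(\lambda\) the Lagrange multiplier enforcing the noise constraint; the steady state as \(t \to \infty\) is the denoised image.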
cc6a972b3ce231aa86757ecfe6af7997e6623a13
In real-time speech recognition applications, the latency is an important issue. We have developed a character-level incremental speech recognition (ISR) system that responds quickly even during the speech, where the hypotheses are gradually improved while the speaking proceeds. The algorithm employs a speech-to-character unidirectional recurrent neural network (RNN), which is end-to-end trained with connectionist temporal classification (CTC), and an RNN-based character-level language model (LM). The output values of the CTC-trained RNN are character-level probabilities, which are processed by beam search decoding. The RNN LM augments the decoding by providing long-term dependency information. We propose tree-based online beam search with additional depth-pruning, which enables the system to process infinitely long input speech with low latency. This system not only responds quickly on speech but also can dictate out-of-vocabulary (OOV) words according to pronunciation. The proposed model achieves the word error rate (WER) of 8.90% on the Wall Street Journal (WSJ) Nov'92 20K evaluation set when trained on the WSJ SI-284 training set.
f4f3a10d96e0b6d134e7e347e1727b7438d4006f
1ecffbf969e0d46acfcdfb213e47027227d8667b
Although risk taking traditionally has been viewed as a unitary, stable individual difference variable, emerging evidence in behavioral decision-making research suggests that risk taking is a domain-specific construct. Utilizing a psychological risk-return framework that regresses risk taking on the perceived benefits and perceived riskiness of an activity (Weber & Milliman, 1997), this study examined the relations between risk attitude and broad personality dimensions using the new HEXACO personality framework (Lee & Ashton, 2004) across four risk domains. This personality framework, which has been replicated in lexical studies in over 12 natural languages, assesses personality over six broad personality dimensions, as opposed to the traditional Five-Factor Model, or ‘‘Big Five.’’ Through path analysis, we regressed risk taking in four separate domains on risk perceptions, perceived benefits, and the six HEXACO dimensions. Across all risk domains, we found that the emotionality dimension was associated with heightened risk perceptions and high conscientiousness was associated with less perceived benefits. We also report several unique patterns of domain-specific relations between the HEXACO dimensions and risk attitude. Specifically, openness was associated with risk taking and perceived benefits for social and recreational risks, whereas lower honesty/humility was associated with greater health/safety and ethical risk taking. These findings extend our understanding of how individuals approach risk across a variety of contexts, and further highlight the utility of honesty/humility, a dimension not recovered in Big Five models, in individual differences research. Key words: risk taking; risk perception; risk-return framework; personality; HEXACO; honesty/humility. Risk taking has traditionally been viewed as an enduring, stable, and domain-invariant construct in both behavioral decision making and personality research (e.g., Eysenck & Eysenck, 1977; Kahneman & Tversky, 1979; Paunonen & Jackson, 1996; Tellegen, 1985). However, recent advances suggest that risk taking is content, or domain, specific (Blais & Weber, 2006; Hanoch, Johnson, & Wilke, 2006; Soane & Chmiel, 2005; Weber, Blais, & Betz, 2002).
34b9ba36c030cfb7a141e53156aa1591dfce3dcd
To control vehicle growth and air pollution, Beijing’s municipal government imposed a vehicle lottery system in 2011, which randomly allocated a quota of licenses to potential buyers. This paper investigates the effect of this policy on fleet composition, fuel consumption, air pollution, and social welfare. Using car registration data, we estimate a random coefficient discrete choice model and conduct counterfactual analysis based on the estimated parameters. We find that the lottery reduced new passenger vehicle sales by 50.15%, fuel consumption by 48.69%, and pollutant emissions by 48.69% in 2012. Also, such lottery shifted new auto purchases towards high-end but less fuel efficient vehicles. In our counterfactual analysis, we show that a progressive tax scheme works better than the lottery system at decreasing fuel consumption and air pollution, and leads to a higher fleet fuel efficiency and less welfare loss.
fac0151ed0494caf10c7d778059f176ba374e29c
4a74eb59728f0d3a06302c668db44d434bd7d69e
Electronic money (or e-money) is money that is represented digitally and can be exchanged by means of a smart card from one party to another without the need for an intermediary. It is anticipated that e-money will work just like paper money. One of its potential key features is anonymity. The proceeds of crime that are in the form of e-money could be used to buy foreign currency and high-value goods to be resold. E-money may therefore be used to place dirty money without having to smuggle cash or conduct face-to-face transactions.
d8237600841361f7811f5fd9effaed9d2e6e34b0
49a19fe67d8ef941af598e87775112f0b3191091
The main objective of this study is to create a fast, easy, and efficient algorithm for disease prediction, with a low error rate, that can be applied even to large data sets and show reasonable patterns with dependent variables. For disease identification and prediction in data mining, a new hybrid algorithm was constructed. The Disease Identification and Prediction (DIP) algorithm, which is a combination of decision tree and association rule mining, is used to predict the chances of certain diseases occurring in particular areas. It also shows the relationship between different parameters for the prediction. Software implementing this algorithm was also developed in VB.NET.
dfb9eec6c6ae7d3e07123045c3468c9b57b2a7e2
903148db6796946182f27affc89c5045e6572ada
The hash join algorithm family is one of the leading techniques for equi-join performance evaluation. OLAP systems borrow this line of research to efficiently implement foreign key joins between dimension tables and big fact tables. From a data warehouse schema and workload feature perspective, the hash join algorithm can be further simplified with multidimensional mapping, and foreign key join algorithms can be evaluated from multiple perspectives instead of a single performance perspective. In this paper, we introduce the surrogate key index oriented foreign key join as a schema-conscious, OLAP-workload-customized design to comprehensively evaluate how state-of-the-art join algorithms perform in OLAP workloads. Our experiments and analysis gave the following insights: (1) a customized foreign key join algorithm for OLAP workloads can improve join performance beyond general-purpose hash joins; (2) each join algorithm shows strong and weak performance regions dominated by the cache locality ratio input_size/cache_size in a fine-grained micro join benchmark; (3) the simple hardware-oblivious shared hash table join outperforms the complex hardware-conscious radix partitioning hash join in most benchmark cases; (4) the customized foreign key join algorithm with a surrogate key index simplifies the algorithm complexity for hardware accelerators and makes it easy to implement on different hardware accelerators. Overall, we argue that improving join performance is systematic work, as opposed to merely hardware-conscious algorithm optimizations, and OLAP domain knowledge enables the surrogate key index to be effective for foreign key joins in data warehousing workloads for both CPUs and hardware accelerators.
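A hedged toy illustration (not the paper's implementation or benchmark) of why a surrogate-key index simplifies OLAP foreign key joins: when dimension rows are addressed by dense surrogate keys, the "join" degenerates into array indexing, whereas a general hash join must first build and then probe a hash table over the same data.

```python
import numpy as np

# Dimension table: surrogate keys are dense positions 0..n_dim-1.
n_dim, n_fact = 1_000, 100_000
rng = np.random.default_rng(0)
dim_attr = rng.integers(0, 50, size=n_dim)       # e.g. a group-by column
fact_fk = rng.integers(0, n_dim, size=n_fact)    # foreign keys in the fact table

# Surrogate-key index "join": the foreign key is a direct offset into the dimension.
joined_attr_vec = dim_attr[fact_fk]              # one gather, no hash table

# General-purpose hash join over the same data, for comparison.
hash_table = {key: attr for key, attr in enumerate(dim_attr)}   # build phase
joined_attr_hash = [hash_table[k] for k in fact_fk]              # probe phase

assert (joined_attr_vec == np.array(joined_attr_hash)).all()
```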
4e8930ae948262a89acf2e43c8e8b6e902c312c4
Although image compression has been actively studied for decades, there has been relatively little research on learning to compress images with modern neural networks. Standard approaches, such as those employing patch-based autoencoders, have shown a great deal of promise but cannot compete with popular image codecs because they fail to address three questions: 1) how to effectively binarize activations: in the absence of binarization, a bottleneck layer alone tends not to lead to efficient compression; 2) how to achieve variable-rate encoding: a standard autoencoder generates a fixed-length code for each fixed-resolution input patch, resulting in the same cost for low- and high-entropy patches, and requiring the network to be completely retrained to achieve different compression rates; and 3) how to avoid block artifacts: patch-based approaches are prone to block discontinuities. We propose a general framework for variable-rate image compression and a novel architecture based on convolutional and deconvolutional recurrent networks, including LSTMs, that address these issues and report promising results compared to existing baseline codecs. We evaluate the proposed methods on a large-scale benchmark consisting of tiny images (32 × 32), which proves to be very challenging for all the methods.
8f9376f3b71e9182c79531551d6e953cd02d7fe6
Enabling technologies for wireless sensor networks have gained considerable attention in research communities over the past few years. It is highly desirable, even necessary in certain situations, for wireless sensor nodes to be self-powered. With this goal in mind, a vibration based piezoelectric generator has been developed as an enabling technology for wireless sensor networks. The focus of this paper is to discuss the modeling, design, and optimization of a piezoelectric generator based on a two-layer bending element. An analytical model of the generator has been developed and validated. In addition to providing intuitive design insight, the model has been used as the basis for design optimization. Designs of 1 cm³ in size generated using the model have demonstrated a power output of 375 μW from a vibration source of 2.5 m s⁻² at 120 Hz. Furthermore, a 1 cm³ generator has been used to power a custom designed 1.9 GHz radio transmitter from the same vibration source.
96989405985e2d90370185f1e025443b56106d1a
This paper describes discriminative language modeling for a large vocabulary speech recognition task. We contrast two parameter estimation methods: the perceptron algorithm, and a method based on maximizing the regularized conditional log-likelihood. The models are encoded as deterministic weighted finite state automata, and are applied by intersecting the automata with word-lattices that are the output from a baseline recognizer. The perceptron algorithm has the benefit of automatically selecting a relatively small feature set in just a couple of passes over the training data. We describe a method based on regularized likelihood that makes use of the feature set given by the perceptron algorithm, and initialization with the perceptron’s weights; this method gives an additional 0.5% reduction in word error rate (WER) over training with the perceptron alone. The final system achieves a 1.8% absolute reduction in WER for a baseline first-pass recognition system (from 39.2% to 37.4%), and a 0.9% absolute reduction in WER for a multi-pass recognition system (from 28.9% to 28.0%).
4808ca8a317821bb70e226c5ca8c23241dd22586
This paper presents the state of the art from the available literature on mobile health care. The study was performed by means of a systematic review, a way of assessing and interpreting all available research on a particular topic in a given area using a reliable and rigorous method. From an initial amount of 1,482 papers, we extracted and analysed data via a full reading of 40 (2.69%) of the papers that matched our selection criteria. Our analysis since 2010 shows current development in 10 application areas and presents ongoing trends and technical challenges on the subject. The application areas include: patient monitoring, infrastructure, software architecture, modeling, framework, security, fications, multimedia, mobile cloud computing, and literature reviews on the topic. The most relevant challenges include the low battery life of devices, multiplatform development, data transmission and security. Our paper consolidates recent findings in the field and serves as a resourceful guide for future research planning and development.
c876c5fed5b6a3a91b5f55e1f776d629cc8ed9bc
675913f7d834cf54a6d90b33e929b999cd6afc7b
In the phase shifted full bridge (PSFB) pulse width modulation (PWM) converter, external snubber capacitors are connected in parallel to insulated gate bipolar transistors (IGBTs) in order to decrease turn-off losses. The zero voltage transition (ZVT) condition is not provided at light loads; thus, the parallel capacitors discharge through the IGBTs at turn-on, which causes switching losses and a failure risk for the IGBTs. Capacitor discharge through the IGBT restricts the use of high-value snubber capacitors, and the turn-off loss of the IGBT increases at high currents. This problematic condition occurs especially at the lagging leg. In this study, a new technique enabling the use of high-value snubber capacitors with the lagging leg of the PSFB PWM converter is proposed. As advantages of the proposed technique, high capacitive discharge current through the IGBT is prevented at light loads, the turn-off switching losses of the IGBTs are decreased, and the performance of the converter is improved at high currents. The proposed PSFB PWM converter includes an auxiliary circuit, and it has a simple structure, low cost, and ease of control as well. The operation principle and detailed design procedure of the converter are presented. The theoretical analysis is verified exactly by a prototype of a 75 kHz, 10 kW converter.
ce56eb0d9841b6e727077e9460b938f78506b324
The use of virtual reality through exergames or active video games, i.e. a new form of interactive gaming, as a complementary tool in rehabilitation has been a frequent focus in research and clinical practice in the last few years. However, evidence of their effectiveness is scarce in the older population. This review aims to provide a summary of the effects of exergames in improving physical functioning in older adults. A search for randomized controlled trials was performed in the databases EMBASE, MEDLINE, PsyInfo, the Cochrane database, PEDro and ISI Web of Knowledge. Results from the included studies were analyzed through a critical review and for methodological quality by the PEDro scale. Thirteen studies were included in the review. The most common apparatus for exergames intervention was the Nintendo Wii gaming console (8 studies), followed by computer games and dance video games with pad (two studies each), and only one study with the Balance Rehabilitation Unit. The Timed Up and Go was the most frequently used instrument to assess physical functioning (7 studies). According to the PEDro scale, most of the studies presented methodological problems, with a high proportion of scores below 5 points (8 studies). The exergame protocols and their durations varied widely, and the benefits for physical function in older people remain inconclusive. However, a consensus between studies is the positive motivational aspect that the use of exergames provides. Further studies are needed in order to achieve better methodological quality, external validity and provide stronger scientific evidence.
08f9a62cdbe43fca7199147123a7d957892480af
EMV, also known as "Chip and PIN", is the leading system for card payments worldwide. It is used throughout Europe and much of Asia, and is starting to be introduced in North America too. Payment cards contain a chip so they can execute an authentication protocol. This protocol requires point-of-sale (POS) terminals or ATMs to generate a nonce, called the unpredictable number, for each transaction to ensure it is fresh. We have discovered two serious problems: a widespread implementation flaw and a deeper, more difficult to fix flaw with the EMV protocol itself. The first flaw is that some EMV implementers have merely used counters, timestamps or home-grown algorithms to supply this nonce. This exposes them to a "pre-play" attack which is indistinguishable from card cloning from the standpoint of the logs available to the card-issuing bank, and can be carried out even if it is impossible to clone a card physically. Card cloning is the very type of fraud that EMV was supposed to prevent. We describe how we detected the vulnerability, a survey methodology we developed to chart the scope of the weakness, evidence from ATM and terminal experiments in the field, and our implementation of proof-of-concept attacks. We found flaws in widely-used ATMs from the largest manufacturers. We can now explain at least some of the increasing number of frauds in which victims are refused refunds by banks which claim that EMV cards cannot be cloned and that a customer involved in a dispute must therefore be mistaken or complicit. The second problem was exposed by the above work. Independent of the random number quality, there is a protocol failure: the actual random number generated by the terminal can simply be replaced by one the attacker used earlier when capturing an authentication code from the card. This variant of the pre-play attack may be carried out by malware in an ATM or POS terminal, or by a man-in-the-middle between the terminal and the acquirer. We explore the design and implementation mistakes that enabled these flaws to evade detection until now: shortcomings of the EMV specification, of the EMV kernel certification process, of implementation testing, formal analysis, and monitoring customer complaints. Finally we discuss countermeasures. More than a year after our initial responsible disclosure of these flaws to the banks, action has only been taken to mitigate the first of them, while we have seen a likely case of the second in the wild, and the spread of ATM and POS malware is making it ever more of a threat.
15f5ce559c8f3ea14a59cf49bacead181545dfb0
We construct a short group signature scheme. Signatures in our scheme are approximately the size of a standard RSA signature with the same security. Security of our group signature is based on the Strong Diffie-Hellman assumption and a new assumption in bilinear groups called the Decision Linear assumption. We prove security of our system, in the random oracle model, using a variant of the security definition for group signatures recently given by Bellare, Micciancio, and Warinschi.
96084442678300ac8be7778cf10a1379d389901f
In this paper, two different approaches for the additive manufacturing of microwave components are discussed. The first approach is the popular and cheap fused deposition modeling (FDM). It is shown that, by using different infill factors, FDM is suitable for the manufacturing of devices with a controlled dielectric constant. The second approach is stereolithography (SLA). With this approach, better results can be obtained in terms of resolution. Furthermore, a very easy way to copper-plate the surface of microwave devices is shown, and its effectiveness is demonstrated through the manufacturing and measurement of a two-pole filter with mushroom-shaped resonators.
0668aba8199335b347a5c8d0cdd8e75cb7cd6122
Automatic understanding of food is an important research challenge. Food recognition engines can provide a valid aid for automatically monitoring the patient's diet and food-intake habits directly from images acquired using mobile or wearable cameras. One of the first challenges in the field is the discrimination between images containing food versus the others. Existing approaches for food vs non-food classification have used both shallow and deep representations, in combination with multi-class or one-class classification approaches. However, they have been generally evaluated using different methodologies and data, making a real comparison of the performances of existing methods unfeasible. In this paper, we consider the most recent classification approaches employed for food vs non-food classification, and compare them on a publicly available dataset. Different deep-learning based representations and classification methods are considered and evaluated.
1f7d9319714b603d87762fa60e47b0bb40db25b5
Several standard text-categorization techniques were applied to the problem of automated essay grading. Bayesian independence classifiers and k-nearest-neighbor classifiers were trained to assign scores to manually-graded essays. These scores were combined with several other summary text measures using linear regression. The classifiers and regression equations were then applied to a new set of essays. The classifiers worked very well. The agreement between the automated grader and the final manual grade was as good as the agreement between human graders.
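As a hedged sketch of the pipeline described above (toy essays, arbitrary feature choices, scikit-learn components standing in for the original classifiers; not the authors' exact setup): train a Bayesian classifier and a k-nearest-neighbour classifier on graded essays, then combine their predicted scores with a simple summary text measure via linear regression.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LinearRegression

essays = ["the experiment shows clear evidence of the effect",
          "bad essay short",
          "a thorough argument with supporting detail and analysis",
          "some evidence but weak structure and little detail"]
scores = np.array([5, 1, 6, 3])          # manually assigned grades

X = CountVectorizer().fit_transform(essays)
nb = MultinomialNB().fit(X, scores)
knn = KNeighborsClassifier(n_neighbors=1).fit(X, scores)

# Combine classifier outputs with a crude summary measure (essay length).
features = np.column_stack([nb.predict(X),
                            knn.predict(X),
                            [len(e.split()) for e in essays]])
combiner = LinearRegression().fit(features, scores)
print(combiner.predict(features))        # fitted scores on the training essays
```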
81dbf427ba087cf3a0f22b59e74d049f881bbbee
Recent advances in technology and in ideology have unlocked entirely new directions for education research. Mounting pressure from increasing tuition costs and free, online course offerings is opening discussion and catalyzing change in the physical classroom. The flipped classroom is at the center of this discussion. The flipped classroom is a new pedagogical method, which employs asynchronous video lectures and practice problems as homework, and active, group-based problem solving activities in the classroom. It represents a unique combination of learning theories once thought to be incompatible—active, problem-based learning activities founded upon a constructivist ideology and instructional lectures derived from direct instruction methods founded upon behaviorist principles. This paper provides a comprehensive survey of prior and ongoing research of the flipped classroom. Studies are characterized on several dimensions. Among others, these include the type of in-class and out-of-class activities, the measures used to evaluate the study, and methodological characteristics for each study. Results of this survey show that most studies conducted to date explore student perceptions and use single-group study designs. Reports of student perceptions of the flipped classroom are somewhat mixed, but are generally positive overall. Students tend to prefer in-person lectures to video lectures, but prefer interactive classroom activities over lectures. Anecdotal evidence suggests that student learning is improved for the flipped compared to traditional classroom. However, there is very little work investigating student learning outcomes objectively. We recommend for future work studies investigating objective learning outcomes using controlled experimental or quasi-experimental designs. We also recommend that researchers carefully consider the theoretical framework used to guide the design of in-class activities.
1 The Rise of the Flipped Classroom
There are two related movements that are combining to change the face of education. The first of these is a technological movement. This technological movement has enabled the amplification and duplication of information at an extremely low-cost. It started with the printing press in the 1400s, and has continued at an ever-increasing rate. The electronic telegraph came in the 1830s, wireless radio in the late 1800s and early 1900s, television in the 1920s, computers in the 1940s, the internet in the 1960s, and the world-wide web in the 1990s. As these technologies have been adopted, the ideas that have been spread through their channels have enabled a second movement. Whereas the technological movement sought to overcome real physical barriers to the free and open flow of information, this ideological movement seeks to remove the artificial, man-made barriers. This is epitomized in the free software movement (see, e.g., Stallman and Lessig [67]), although this movement is certainly not limited to software. A good example of this can be seen from the encyclopedia. Encyclopedia Britannica has been continuously published for nearly 250 years [20] (since 1768). Although Encyclopedia Britannica content has existed digitally since 1981, it was not until the advent of Wikipedia in 2001 that open access to encyclopedic content became available to users worldwide.
Access to Encyclopedia Britannica remains restricted to a limited number of paid subscribers [21], but access to Wikipedia is open, and the website receives over 2.7 billion US monthly page views [81]. Thus, although the technology and digital content were available to enable free access to encyclopedic content, ideological roadblocks prevented this from happening. It was not until these ideological roadblocks had been overcome that humanity was empowered to create what has become the world's largest, most up-to-date encyclopedia [81]. In a similar way, we are beginning to see the combined effects of these two movements on higher education. In the technological arena, research has made significant advances. Studies show that video lectures (slightly) outperform in-person lectures [9], with interactive online videos doing even better (effect size = 0.5) [83,51]. Online homework is just as effective as paper-and-pencil homework [8,27], and carefully developed intelligent tutoring systems have been shown to be just as effective as human tutors [77]. Despite these advancements, adoption has been slow, as the development of good educational systems can be prohibitively expensive. However, the corresponding ideological movement is breaking down these financial barriers. Ideologically, MIT took a significant step forward when it announced its OpenCourseWare (OCW) initiative in 2001 [53]. This opened access to information that had previously only been available to students who paid university tuition, which is over $40,000/yr at MIT [54]. Continuing this trend, MIT alum Salman Khan founded the Khan Academy in 2006, which as of 2012 had released a library of over 3,200 videos and 350 practice exercises. The stated mission of the Khan Academy is to provide "a free world-class education to anyone anywhere." In the past year, this movement has rapidly gained momentum. Inspired by Khan's efforts, Stanford professors Sebastian Thrun and Andrew Ng opened access to their online courses in Fall 2011. Thrun taught artificial intelligence with Peter Norvig, attracting over 160,000 students to their free online course. Subsequently, Thrun left the university and founded Udacity, which is now hosting 11 free courses [76]. With support from Stanford, Ng also started his own open online educational initiative, Coursera. Princeton, the University of Pennsylvania, and the University of Michigan have joined the Coursera partnership, which has expanded its offerings to 42 courses [10]. MIT has also upgraded its open educational initiative, and joined with Harvard in a $60 million venture, edX [19]. EdX will "offer Harvard and MIT classes online for free." While online education is improving, expanding, and becoming openly available for free, university tuition at brick-and-mortar schools is rapidly rising [56]. Tuition in the University of California system has nearly tripled since 2000 [32]. Naturally, this is not being received well by university students in California [2]. Likewise, students in Quebec are actively protesting planned tuition hikes [13]. In resistance to planned tuition hikes, student protestors at Rutgers interrupted a board meeting on June 20, 2012 to make their voices heard [36]. Adding fuel to the fire, results from a recent study by Gillen et al. [31] indicate that undergraduate student tuition is used to subsidize research. As a result, the natural question being asked by both students and educational institutions is exactly what students are getting for their money.
This is putting pressure on physical academic institutions to improve and enhance the in-person educational experience of their students.
f395edb9c8ca5666ec54d38fda289e181dbc5d1b
In this paper, a new voltage source converter for medium-voltage applications is presented which can operate over a wide range of voltages (2.4-7.2 kV) without the need for connecting power semiconductors in series. The operation of the proposed converter is studied and analyzed. In order to control the proposed converter, a space-vector modulation (SVM) strategy that exploits redundant switching states is proposed. These redundant switching states help to control the output voltage and balance the voltages of the flying capacitors in the proposed converter. The performance of the converter under different operating conditions is investigated in the MATLAB/Simulink environment. The feasibility of the proposed converter is evaluated experimentally on a 5-kVA prototype.
81a4183d5042a93356bc59cda54ede3283efe583
This paper presents an approach to people identification using gait based on floor pressure data. By using a large-area, high-resolution pressure-sensing floor, we were able to obtain 3D trajectories of the center of foot pressure over a footstep, which contain both the 1D pressure profile and the 2D position trajectory of the center of pressure (COP). Based on the 3D COP trajectories, a set of features is then extracted and used for people identification together with other features such as stride length and cadence. The Fisher linear discriminant is used as the classifier. Encouraging results have been obtained using the proposed method, with an average recognition rate of 94% and a false alarm rate of 3% using pair-wise footstep data from 10 subjects.
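A minimal sketch of the final classification step only (a Fisher linear discriminant over per-footstep feature vectors); the feature values below are synthetic placeholders rather than COP-derived measurements.

```python
# Hedged sketch: Fisher LDA over footstep feature vectors (e.g., stride length, cadence, COP stats).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_subjects, steps_per_subject, n_features = 10, 20, 6
# Synthetic footstep features: each subject gets its own cluster in feature space.
X = np.vstack([rng.normal(loc=s, scale=0.5, size=(steps_per_subject, n_features))
               for s in range(n_subjects)])
y = np.repeat(np.arange(n_subjects), steps_per_subject)

clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))
```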
6b6fa87688f1e0ddb676a9ce5d18a7185f98d0c5
Traditional indoor laser scanning trolleys/backpacks with multiple laser scanners, panorama cameras, and an inertial measurement unit (IMU) installed are a popular solution to the 3D indoor mapping problem. However, the cost of those mapping suites is quite high, and they can hardly be replicated with consumer electronic components. The consumer RGB-Depth (RGB-D) camera (e.g., Kinect V2) is a low-cost option for gathering 3D point clouds. However, because of its narrow field of view (FOV), its collection efficiency and data coverage are lower than those of laser scanners. Additionally, the limited FOV increases the scanning workload, the data processing burden, and the risk of visual odometry (VO)/simultaneous localization and mapping (SLAM) failure. To find an efficient and low-cost way to collect 3D point cloud data with auxiliary information (i.e., color) for indoor mapping, in this paper we present a prototype indoor mapping solution built upon the calibration of multiple RGB-D sensors to construct an array with a large FOV. Three time-of-flight (ToF)-based Kinect V2 RGB-D cameras are mounted on a rig with different view directions in order to form a large field of view. The three RGB-D data streams are synchronized and gathered by the OpenKinect driver. The intrinsic calibration, which involves the geometric and depth calibration of the individual RGB-D cameras, is solved by a homography-based method and by ray correction followed by range-bias correction based on pixel-wise spline functions, respectively. The extrinsic calibration is achieved through a coarse-to-fine scheme that solves the initial exterior orientation parameters (EoPs) from sparse control markers and further refines the initial values by an iterative closest point (ICP) variant minimizing the distance between the RGB-D point clouds and the reference laser point clouds. The effectiveness and accuracy of the proposed prototype and calibration method are evaluated by comparing the point clouds derived from the prototype with ground-truth data collected by a terrestrial laser scanner (TLS). The overall analysis of the results shows that the proposed method achieves the seamless integration of multiple point clouds from three Kinect V2 cameras collected at 30 frames per second, resulting in low-cost, efficient, and high-coverage 3D color point cloud collection for indoor mapping applications.
330f258e290adc2f78820eddde589946f775ae65
Rough set theory can be applied to rule induction. There are two different types of classification rules, positive and boundary rules, leading to different decisions and consequences. They can be distinguished not only by syntactic measures such as confidence, coverage and generality, but also by semantic measures such as decision-monotocity, cost and risk. The classification rules can be evaluated locally for each individual rule, or globally for a set of rules. Both types of classification rules can be generated from, and interpreted by, a decision-theoretic model, which is a probabilistic extension of the Pawlak rough set model. As an important concept of rough set theory, an attribute reduct is a subset of attributes that are jointly sufficient and individually necessary for preserving a particular property of the given information table. This paper addresses attribute reduction in decision-theoretic rough set models with respect to different classification properties, such as decision-monotocity, confidence, coverage, generality and cost. It is important to note that many of these properties can be truthfully reflected by a single measure c in the Pawlak rough set model. On the other hand, they need to be considered separately in probabilistic models. A straightforward extension of the c measure is unable to evaluate these properties. This study provides a new insight into the problem of attribute reduction.
642db624b5b33a02a435ee1415d7c9f9cef36e1d
This paper extends previous work with Dyna, a class of architectures for intelligent systems based on approximating dynamic programming methods. Dyna architectures integrate trial-and-error (reinforcement) learning and execution-time planning into a single process operating alternately on the world and on a learned model of the world. In this paper I present and show results for two Dyna architectures. The Dyna-PI architecture is based on dynamic programming's policy iteration method and can be related to existing AI ideas such as evaluation functions and universal plans (reactive systems). Using a navigation task, results are shown for a simple Dyna-PI system that simultaneously learns by trial and error, learns a world model, and plans optimal routes using the evolving world model. The Dyna-Q architecture is based on Watkins's Q-learning, a new kind of reinforcement learning. Dyna-Q uses a less familiar set of data structures than does Dyna-PI, but is arguably simpler to implement and use. We show that Dyna-Q architectures are easy to adapt for use in changing environments. Introduction to Dyna. How should a robot decide what to do? The traditional answer in AI has been that it should deduce its best action in light of its current goals and world model, i.e., that it should plan. However, it is now widely recognized that planning's usefulness is limited by its computational complexity and by its dependence on an accurate world model. An alternative approach is to do the planning in advance and compile its result into a set of rapid reactions, or situation-action rules, which are then used for real-time decision making. Yet a third approach is to learn a good set of reactions by trial and error; this has the advantage of eliminating the dependence on a world model. In this paper I briefly introduce Dyna, a class of simple architectures integrating and permitting tradeoffs among these three approaches. Dyna architectures use machine learning algorithms to approximate the conventional optimal control technique known as dynamic programming (DP) (Bellman; Ross). DP itself is not a learning method, but rather a computational method for determining optimal behavior given a complete model of the task to be solved. It is very similar to state-space search, but differs in that it is more incremental and never considers actual action sequences explicitly, only single actions at a time. This makes DP more amenable to incremental planning at execution time, and also makes it more suitable for stochastic or incompletely modeled environments, as it need not consider the extremely large number of sequences possible in an uncertain environment. Learned world models are likely to be stochastic and uncertain, making DP approaches particularly promising for learning systems. Dyna architectures are those that learn a world model online while using approximations to DP to learn and plan optimal behavior. Intuitively, Dyna is based on the old idea that planning is like trial-and-error learning from hypothetical experience (Craik; Dennett). The theory of Dyna is based on the theory of DP (e.g., Ross) and on DP's relationship to reinforcement learning (Watkins; Barto, Sutton and Watkins), to temporal difference learning (Sutton), and to AI methods for planning and search (Korf). Werbos has previously argued for the general idea of building AI systems that approximate dynamic programming, and Whitehead and others (Sutton and Barto; Sutton and Pinette; Rumelhart et al.) have presented results for the specific idea of augmenting a reinforcement learning system with a world model used for
planning. Dyna-PI: Dyna by Approximating Policy Iteration. I call the first Dyna architecture Dyna-PI because it is based on approximating a DP method known as policy iteration (Howard). The Dyna-PI architecture consists of four components interacting as shown in the figure. The policy is simply the function formed by the current set of reactions; it receives as input a description of the current state of the world and produces as output an action to be sent to the world. The world represents the task to be solved; prototypically, it is the robot's external environment. The world receives actions from the policy and produces a next-state output and a reward output. The overall task is defined as maximizing the long-term average reward per time step (cf. Russell). The architecture also includes an explicit world model. The world model is intended to mimic the one-step input-output behavior of the real world. Finally, the Dyna-PI architecture includes an evaluation function that rapidly maps states to values, much as the policy rapidly maps states to actions. The evaluation function, the policy, and the world model are each updated by separate learning processes. (Figure: diagram of the Dyna-PI components, with the WORLD and EVALUATION FUNCTION boxes exchanging State, Action, Reward (scalar), and Heuristic Reward (scalar) signals.)
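The following is a minimal tabular Dyna-Q sketch in the spirit of the architecture described above: learn Q from real experience, learn a deterministic world model, and perform extra planning updates by replaying transitions from that model. The toy chain world, step counts, and learning parameters are arbitrary illustrations, not the paper's navigation task.

```python
# Hedged sketch of tabular Dyna-Q: real step -> Q update -> model update -> n planning updates.
import random

N_STATES, ACTIONS = 6, [0, 1]            # tiny chain: action 1 moves right, 0 moves left
alpha, gamma, eps, n_planning = 0.1, 0.95, 0.1, 20

def step(s, a):
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, r

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
model = {}                               # (s, a) -> (s', r): learned deterministic world model

s = 0
for t in range(2000):
    # epsilon-greedy action selection
    a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda b: Q[(s, b)])
    s2, r = step(s, a)
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
    model[(s, a)] = (s2, r)
    for _ in range(n_planning):          # planning from hypothetical (model-generated) experience
        ps, pa = random.choice(list(model))
        ps2, pr = model[(ps, pa)]
        Q[(ps, pa)] += alpha * (pr + gamma * max(Q[(ps2, b)] for b in ACTIONS) - Q[(ps, pa)])
    s = 0 if s2 == N_STATES - 1 else s2  # restart at the left end after reaching the goal

print("value of start state:", max(Q[(0, b)] for b in ACTIONS))
```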
5991fee5265df4466627ebba62e545a242d9e22d
We use an autoencoder composed of stacked restricted Boltzmann machines to extract features from the history of individual stock prices. Our model is able to discover an enhanced version of the momentum effect in stocks without extensive hand-engineering of input features and deliver an annualized return of 45.93% over the 1990-2009 test period versus 10.53% for basic momentum.
d6cc46d8da91ded74ff31785000edc9ca8d67e23
In this work, a wide-band, planar, printed inverted-F antenna (PIFA) is proposed in a multiple-input-multiple-output (MIMO) antenna configuration. The MIMO antenna system consists of 4 elements operating in the 2.1 GHz frequency band for 4G LTE applications. The proposed design is compact, low profile and suitable for wireless handheld devices. The MIMO antenna is fabricated on commercially available FR4 substrate with εr equal to 4.4. The dimensions of a single element are 26×6 mm2, with a board volume of 100×60×0.8 mm3. Isolation is improved by 5 dB in the proposed design using ground slots. Characteristic mode analysis (CMA) is used to analyze the behaviour of the antenna system.
2586dd5514cb203f42292f25238f1537ea5e4b8c
f20fbad0632fdd7092529907230f69801c382c0f
The push for 100 Gb/s optical transport and beyond necessitates electronic components with higher speed and integration level in order to drive down the cost, complexity and size of transceivers [1-2]. This requires parallel multi-channel optical transceivers, each operating at 25 Gb/s and beyond. Due to variations in the output power of transmitters and, in some cases, different optical paths, the parallel receivers have to operate at different input optical power levels. This trend places increasing strain on the acceptable inter-channel crosstalk in integrated multi-channel receivers [3]. Minimizing this crosstalk penalty when all channels are operational is becoming increasingly important in ultra-high-throughput optical links.
430ddd5f2ed668e4c77b529607afa378453e11be
In this work, we study the impact of the word order decoding direction for statistical machine translation (SMT). Both phrase-based and hierarchical phrase-based SMT systems are investigated by reversing the word order of the source and/or target language and comparing the translation results with the normal direction. Analyses are performed on several components, such as the alignment model, language model and phrase table, to see which of them accounts for the differences generated by the various translation directions. Furthermore, we propose to use system combination, alignment combinations and phrase table combinations to benefit from systems trained with different translation directions. Experimental results show improvements of up to 1.7 points in BLEU and 3.1 points in TER compared to the normal direction systems for the NTCIR-9 Japanese-English and Chinese-English tasks.
7a1e584f9a91472d6e15184f1648f57256216198
A Language and Program for Complex Bayesian Modelling. W. R. Gilks, A. Thomas and D. J. Spiegelhalter. Journal of the Royal Statistical Society, Series D (The Statistician), Vol. 43, No. 1, Special Issue: Conference on Practical Bayesian Statistics, 1992 (1994), pp. 169-177. Published by Wiley for the Royal Statistical Society. Stable URL: http://www.jstor.org/stable/2348941
062ece9dd7019b0a3ca7e789acf1dee57571e26d
In the light of continuing debate over the applications of significance testing in psychology journals and following the publication of Cohen's (1994) article, the Board of Scientific Affairs (BSA) of the American Psychological Association (APA) convened a committee called the Task Force on Statistical Inference (TFSI) whose charge was "to elucidate some of the controversial issues surrounding applications of statistics including significance testing and its alternatives; alternative underlying models and data transformation; and newer methods made possible by powerful computers" (BSA, personal communication, February 28, 1996). Robert Rosenthal, Robert Abelson, and Jacob Cohen (cochairs) met initially and agreed on the desirability of having several types of specialists on the task force: statisticians, teachers of statistics, journal editors, authors of statistics books, computer experts, and wise elders. Nine individuals were subsequently invited to join and all agreed. These were Leona Aiken, Mark Appelbaum, Gwyneth Boodoo, David A. Kenny, Helena Kraemer, Donald Rubin, Bruce Thompson, Howard Wainer, and Leland Wilkinson. In addition, Lee Cronbach, Paul Meehl, Frederick Mosteller and John Tukey served as Senior Advisors to the Task Force and commented on written materials. The TFSI met twice in two years and corresponded throughout that period. After the first meeting, the task force circulated a preliminary report indicating its intention to examine issues beyond null hypothesis significance testing. The task force invited comments and used this feedback in the deliberations during its second meeting. After the second meeting, the task force recommended several possibilities for further action, chief of which would be to revise the statistical sections of the American Psychological Association Publication Manual (APA, 1994). After extensive discussion, the BSA recommended that "before the TFSI undertook a revision of the APA Publication Manual, it might want to consider publishing an article in American Psychologist, as a way to initiate discussion in the field about changes in current practices of data analysis and reporting" (BSA, personal communication, November 17, 1997). This report follows that request. The sections in italics are proposed guidelines that the TFSI recommends could be used for revising the APA publication manual or for developing other BSA supporting materials. Following each guideline are comments, explanations, or elaborations assembled by Leland Wilkinson for the task force and under its review. This report is concerned with the use of statistical methods only and is not meant as an assessment of research methods in general. Psychology is a broad science. Methods appropriate in one area may be inappropriate in another. The title and format of this report are adapted from a similar article by Bailar and Mosteller (1988). That article should be consulted, because it overlaps somewhat with this one and discusses some issues relevant to research in psychology. Further detail can also be found in the publications on this topic by several committee members (Abelson, 1995, 1997; Rosenthal, 1994; Thompson, 1996; Wainer, in press; see also articles in Harlow, Mulaik, & Steiger, 1997).
21e2150b6cc03bc6f51405473f57efff598c77bc
We argue that similarity judgments are inferences about generative processes, and that two objects appear similar when they are likely to have been generated by the same process. We describe a formal model based on this idea and show how featural and spatial models emerge as special cases. We compare our approach to the transformational approach, and present an experiment where our model performs better than a transformational model. Every object is the outcome of a generative process. An animal grows from a fertilized egg into an adult, a city develops from a settlement into a metropolis, and an artifact is assembled from a pile of raw materials according to the plan of its designer. Observations like these motivate the generative approach, which proposes that an object may be understood by thinking about the process that generated it. The promise of the approach is that apparently complex objects may be produced by simple processes, an insight that has proved productive across disciplines including biology [18], physics [21], and architecture [1]. To give two celebrated examples from biology, the shape of a pinecone and the markings on a cheetah’s tail can be generated by remarkably simple processes of growth. These patterns can be characterized much more compactly by describing their causal history than by attempting to describe them directly. Leyton has argued that the generative approach provides a general framework for understanding cognition. Applications of the approach can be found in generative theories of perception [12], memory [12], language [3], categorization [2], and music [11]. This paper offers a generative theory of similarity, a notion often invoked by models of high-level cognition. We argue that two objects are similar to the extent that they seem to have been generated by the same underlying process. The literature on similarity covers settings that extend from the comparison of simple stimuli like tones and colored patches to the comparison of highly-structured objects like narratives. The generative approach is relevant to the entire spectrum of applications, but we are particularly interested in high-level similarity. In particular, we are interested in how similarity judgments draw on intuitive theories, or systems of rich conceptual knowledge [15]. Generative processes and theories are intimately linked. Murphy [14], for example, defines a theory as ‘a set of causal relations that collectively generate or explain the phenomena in a domain.’ We hope that our generative theory provides a framework in which to model how similarity judgments emerge from intuitive theories. We develop a formal theory of similarity and compare it to three existing theories. The featural account [20] suggests that the similarity of two objects is a function of their common and distinctive features, the spatial account suggests that similarity is inversely proportional to distance in a spatial representation, [19] and the transformation account suggests that similarity depends on the number of operations required to transform one object into the other [6]. We show that versions of each of these approaches emerge as special cases of our generative approach, and present an experiment that directly compares our approach with the transformation account. A fourth theory suggests that similarity relies on a process of analogical mapping [5]. We will not discuss this approach in detail, but finish by suggesting how a generative approach to analogy differs from the standard view. 
Generative processes and similarity. Before describing our formal model, we give an informal motivation for a generative approach to similarity. Suppose we are shown a prototype object and asked to describe similar objects we might find in the world. There are two kinds of answers: small perturbations of the prototype, or objects produced by small perturbations of the process that generated the prototype. The second strategy is likely to be more successful than the first, since many perturbations of the prototype will not arise from any plausible generative process, and thus could never appear in practice. By construction, however, an object produced by a perturbation of an existing generative process will have a plausible causal history. To give a concrete example, suppose the prototype is a bug generated by a biological process of growth (Figure 1, ii). The bug in i is a small perturbation of the prototype, but seems unlikely to arise since legs are generated in pairs. A perturbation of the generative process might produce a bug with more segments, such as the bug in iii. If we hope to find a bug that is similar but not identical to the prototype, iii is a better bet than i. (Figure 1: three bugs, i, ii (the prototype), and iii; which is more similar to the prototype, i or iii?) A sceptic might argue that this one-shot learning problem can be solved by taking the intersection of the set of objects similar to the prototype and the set of objects that are likely to exist. The second set depends critically on generative processes, but the first set (and therefore the notion of similarity) need not. We think it more likely that the notion of similarity is ultimately grounded in the world, and that it evolved for the purpose of comparing real-world objects. If so, then knowledge about what kinds of objects are likely to exist may be deeply bound up with the notion of similarity. The one-shot learning problem is of practical importance, but is not the standard context in which similarity is discussed. More commonly, subjects are shown a pair of objects and asked to rate the similarity of the pair. Note that both objects are observed to exist and the previous argument does not apply. Yet generative processes are still important, since they help pick out the features critical for the similarity comparison. Suppose, for instance, that a forest-dweller discovers a nutritious mushroom. Which is more similar to the mushroom: a mushroom identical except for its size, or a mushroom identical except for its color? Knowing how mushrooms are formed suggests that size is not a key feature. Mushrooms grow from small to large, and the final size of a plant depends on factors like the amount of sunlight it received and the fertility of the soil that it grew in. Reflections like these suggest that the differently-sized mushroom should be judged more similar. A final reason why generative processes matter is that they are deeply related to essentialism. Medin and Ortony [13] note that 'surface features are frequently constrained by, and sometimes generated by, the deeper, more central parts of objects.' Even if we observe only the surface features of two objects, it may make sense to judge their similarity by comparing the deeper properties inferred to generate the surface features. Yet we can say more: just as surface features are generated by the essence of the object, the essence itself has a generative history.
Surface features are often reliable guides to the essence of an object, but the object's causal history is a still more reliable indicator, if not a defining criterion of its essence. Keil [9] discusses the case of an animal that is born a skunk, then undergoes surgery that leaves it looking exactly like a raccoon. Since the animal is generated in the same way as a skunk (born of skunk parents), we conclude that it remains a skunk, no matter how it appears on the surface. These examples suggest that the generative approach may help to explain a broad class of theory-dependent inferences. We now present a formal model that attempts to capture the intuitions behind all of these cases. A computational theory of similarity. Given a domain D, we develop a theory that specifies the similarity between any two samples from D. A sample from D will usually contain a single object, but working with similarities between sets of objects is useful for some applications. We formalize a generative process as a probability distribution over D that depends on parameter vector θ. Suppose that s1 and s2 are samples from D. We consider two hypotheses: H1 holds that s1 and s2 are independent samples from a single generative process, and H2 holds that the samples are generated from two independently chosen processes. Similarity is defined as the probability that the objects are generated by the same process: that is, the relative posterior probability of H1 compared to H2: sim(s1, s2) = P(H1 | s1, s2) / P(H2 | s1, s2).
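A worked numerical sketch of the H1-versus-H2 comparison under one concrete assumption of my own choosing: each "object" is a binary feature vector whose features are drawn from Bernoulli processes with Beta(1,1) priors on the feature probabilities. The marginal likelihoods (features pooled under H1, modeled separately under H2) yield the posterior odds. This instantiates the general recipe above, not the authors' specific generative processes.

```python
# Hedged sketch: posterior odds that two binary feature vectors share one generative process.
from math import lgamma, exp

def log_beta_bernoulli_evidence(ones, zeros, a=1.0, b=1.0):
    """log of the integral of theta^ones * (1-theta)^zeros under a Beta(a, b) prior."""
    return (lgamma(a + ones) + lgamma(b + zeros) + lgamma(a + b)
            - lgamma(a) - lgamma(b) - lgamma(a + b + ones + zeros))

def similarity(x1, x2, prior_h1=0.5):
    # H1: pool both objects' observations of each feature into one process.
    log_h1 = sum(log_beta_bernoulli_evidence(f1 + f2, 2 - f1 - f2) for f1, f2 in zip(x1, x2))
    # H2: each object has its own independently chosen process per feature.
    log_h2 = sum(log_beta_bernoulli_evidence(f1, 1 - f1) + log_beta_bernoulli_evidence(f2, 1 - f2)
                 for f1, f2 in zip(x1, x2))
    # posterior odds P(H1 | s1, s2) / P(H2 | s1, s2)
    return exp(log_h1 - log_h2) * prior_h1 / (1.0 - prior_h1)

print(similarity([1, 1, 0, 0], [1, 1, 0, 0]))   # identical objects -> odds above 1
print(similarity([1, 1, 0, 0], [0, 0, 1, 1]))   # disjoint features -> odds below 1
```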
6377fee5214d9ace4ce629c9bfe463bdebbd889f
87f8bcae68df7ba371baec5d0a2283ecb366b0fc
A rational model of human categorization behavior is presented that assumes that categorization reflects the derivation of optimal estimates of the probability of unseen features of objects. A Bayesian analysis is performed of what the optimal estimates would be if categories formed a disjoint partitioning of the object space and if features were independently displayed within a category. This Bayesian analysis is placed within an incremental categorization algorithm. The resulting rational model accounts for effects of central tendency of categories, effects of specific instances, learning of linearly nonseparable categories, effects of category labels, extraction of basic level categories, base-rate effects, probability matching in categorization, and trial-by-trial learning functions. Although the rational model considers just 1 level of categorization, it is shown how predictions can be enhanced by considering higher and lower levels. Considering prediction at the lower, individual level allows integration of this rational analysis of categorization with the earlier rational analysis of memory (Anderson & Milson, 1989).
e742d8d7cdbef9393af36495137088cc7ca4e5d5
OBJECTIVE Informal caregivers often experience psychological distress due to the changing functioning of the person with dementia they care for. Improved understanding of the person with dementia reduces psychological distress. To enhance understanding and empathy in caregivers, an innovative technology virtual reality intervention Through the D'mentia Lens (TDL) was developed to experience dementia, consisting of a virtual reality simulation movie and e-course. A pilot study of TDL was conducted. METHODS A pre-test-post-test design was used. Informal caregivers filled out questionnaires assessing person-centeredness, empathy, perceived pressure from informal care, perceived competence and quality of the relationship. At post-test, additional questions about TDL's feasibility were asked. RESULTS Thirty-five caregivers completed the pre-test and post-test. Most participants were satisfied with TDL and stated that TDL gave more insight in the perception of the person with dementia. The simulation movie was graded 8.03 out of 10 and the e-course 7.66. Participants significantly improved in empathy, confidence in caring for the person with dementia, and positive interactions with the person with dementia. CONCLUSION TDL is feasible for informal caregivers and seems to lead to understanding of and insight in the experience of people with dementia. Therefore, TDL could support informal caregivers in their caregiving role.
e8455cd00dd7513800bce5aa028067de7138f53d
Single pole double throw (SPDT) switches are becoming more and more key components in phased-array radar transmit/receive modules. An SPDT switch must be able to handle the output power of a high power amplifier and must provide enough isolation to protect the low noise amplifier in the receive chain when the T/R module is transmitting. Therefore gallium nitride technology seems to become a key technology for high power SPDT switch design. The technology shows good performance on microwave frequencies and is able to handle high power. An X-band SPDT switch, with a linear power handling of over 25 W, has been designed, measured and evaluated. The circuit is designed in the coplanar waveguide AlGaN/GaN technology established at QinetiQ.
b0d343ad82eb4060f016ff39289eacb222c45632
The performance of deep learning based semantic segmentation models heavily depends on sufficient data with careful annotations. However, even the largest public datasets only provide samples with pixel-level annotations for rather limited semantic categories. Such data scarcity critically limits scalability and applicability of semantic segmentation models in real applications. In this paper, we propose a novel transferable semi-supervised semantic segmentation model that can transfer the learned segmentation knowledge from a few strong categories with pixel-level annotations to unseen weak categories with only image-level annotations, significantly broadening the applicable territory of deep segmentation models. In particular, the proposed model consists of two complementary and learnable components: a Label transfer Network (L-Net) and a Prediction transfer Network (PNet). The L-Net learns to transfer the segmentation knowledge from strong categories to the images in the weak categories and produces coarse pixel-level semantic maps, by effectively exploiting the similar appearance shared across categories. Meanwhile, the P-Net tailors the transferred knowledge through a carefully designed adversarial learning strategy and produces refined segmentation results with better details. Integrating the L-Net and P-Net achieves 96.5% and 89.4% performance of the fully-supervised baseline using 50% and 0% categories with pixel-level annotations respectively on PASCAL VOC 2012. With such a novel transfer mechanism, our proposed model is easily generalizable to a variety of new categories, only requiring image-level annotations, and offers appealing scalability in real applications.
3d07718300d4a59482c3f3baafaa696d28a4e027
Smart homes can apply new Internet-Of-Things concepts along with RFID technologies for creating ubiquitous services. This paper introduces a novel read-out method for a hierarchical wireless master-slave RFID reader architecture of multi standard NFC (Near Field Communication) and UHF (Ultra High Frequency) technologies to build a smart home service system that benefits in terms of cost, energy consumption and complexity. Various smart home service use cases such as washing programs, cooking, shopping and elderly health care are described as examples that make use of this system.
5f0806351685bd999699399ea9553c91733ccb7d
e5aaaac7852df686c35e61a6c777cfcb2246c726
For synthetic aperture radar systems, there is an increasing demand for resolutions of 0.2 m × 0.2 m. As the range resolution and system bandwidth are inversely proportional, the system bandwidth is expected to be greater than 1 GHz. Thus an antenna with a wider band needs to be developed. Waveguide slot antennas have been implemented on several SAR satellites due to their inherent advantages such as high efficiency and power handling capacity, but their bandwidth is quite limited. To avoid the manufacturing difficulties of the ridge waveguide, which is capable of broadening the bandwidth of slot antennas, a novel antenna element based on conventional waveguide is designed. The bandwidth for VSWR ≤ 1.5 is greater than 1 GHz at X-band. To reduce the mutual coupling of closely placed antenna elements, a decoupling method with cavity-like walls inserted between adjacent elements is adopted, and their effects on the performance of the antenna are summarized.
f27ef9c1ff0b00ee46beb1bed2f34002bae728ac
7224d949cd34082b1249e8be84fde65b2c6b34fd
We propose a demonstration of Cayuga, a complex event monitoring system for high speed data streams. Our demonstration will show Cayuga applied to monitoring Web feeds; the demo will illustrate the expressiveness of the Cayuga query language, the scalability of its query processing engine to high stream rates, and a visualization of the internals of the query processing engine.
96e7561bd99ed9f607440245451038aeda8d8075
212d1c7cfad4d8dae39deb669337cb46b0274d78
When querying databases, users often wish to express vague concepts, as for instance asking for the cheap hotels. This has been extensively studied in the case of relational databases. In this paper, we propose to study how such useful techniques can be adapted to NoSQL graph databases where the role of fuzziness is crucial. Such databases are indeed among the fastest-growing models for dealing with big data, especially when dealing with network data (e.g., social networks). We consider the Cypher declarative query language proposed for Neo4j which is the current leader on this market, and we present how to express fuzzy queries.
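A small sketch of the underlying idea rather than the paper's Cypher extension: a vague term such as "cheap" is modeled by a membership function, and query results are then ranked or filtered by the resulting degree. The breakpoints (50 and 120) and the hotel rows are hypothetical.

```python
# Hedged sketch: membership degree for the fuzzy term "cheap" and ranking of query results.
def cheap_degree(price, full=50.0, zero=120.0):
    """1 below `full`, 0 above `zero`, linear in between (a simple decreasing membership function)."""
    if price <= full:
        return 1.0
    if price >= zero:
        return 0.0
    return (zero - price) / (zero - full)

# Hypothetical rows that a crisp graph query (e.g., MATCH (h:Hotel) RETURN h.name, h.price) might return.
hotels = [("Alfa", 45.0), ("Bravo", 80.0), ("Charlie", 150.0)]

ranked = sorted(((name, cheap_degree(price)) for name, price in hotels),
                key=lambda t: t[1], reverse=True)
print([r for r in ranked if r[1] > 0])   # alpha-cut at 0: drop hotels that are not cheap at all
```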
0ecb87695437518a3cc5e98f0b872fbfaeeb62be
Internet use and the number of internet users are increasing day by day. Due to the rapid development of internet technology, security is becoming a major issue. Intruders monitor computer networks continuously for attacks. A sophisticated firewall with an efficient intrusion detection system (IDS) is required to protect a computer network from attacks. A comprehensive study of the literature shows that data mining techniques are powerful for developing an IDS as a classifier. The performance of the classifier is a crucial issue in terms of its efficiency, and the number of features to be scanned by the IDS should also be optimized. In this paper, two techniques, C5.0 and an artificial neural network (ANN), are utilized with feature selection. The feature selection technique discards some irrelevant features, while C5.0 and the ANN act as classifiers that label the data as either the normal type or one of the five types of attack. The KDD99 data set is used to train and test the models; the C5.0 model with a reduced number of features produces better results, with almost 100% accuracy. Performance was also verified in terms of data partition size.
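A rough scikit-learn sketch of the pipeline shape described (feature selection followed by two classifiers). The decision tree stands in for C5.0, which has no scikit-learn implementation, and the data is synthetic rather than the KDD99 set.

```python
# Hedged sketch: feature selection + two classifiers, as stand-ins for the C5.0/ANN setup.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier          # stand-in for C5.0
from sklearn.neural_network import MLPClassifier         # the ANN classifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for KDD99: 41 features, 5 classes (normal + four attack families).
X, y = make_classification(n_samples=2000, n_features=41, n_informative=12,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

selector = SelectKBest(mutual_info_classif, k=15).fit(X_tr, y_tr)   # discard weak features
X_tr_s, X_te_s = selector.transform(X_tr), selector.transform(X_te)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr_s, y_tr)
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X_tr_s, y_tr)
print("tree accuracy:", tree.score(X_te_s, y_te), "| ann accuracy:", ann.score(X_te_s, y_te))
```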
8b8788ac5a01280c6484b30cac7a14894f29edf7
Metamaterials are typically engineered by arranging a set of small scatterers or apertures in a regular array throughout a region of space, thus obtaining some desirable bulk electromagnetic behavior. The desired property is often one that is not normally found naturally (negative refractive index, near-zero index, etc.). Over the past ten years, metamaterials have moved from being simply a theoretical concept to a field with developed and marketed applications. Three-dimensional metamaterials can be extended by arranging electrically small scatterers or holes into a two-dimensional pattern at a surface or interface. This surface version of a metamaterial has been given the name metasurface (the term metafilm has also been employed for certain structures). For many applications, metasurfaces can be used in place of metamaterials. Metasurfaces have the advantage of taking up less physical space than do full three-dimensional metamaterial structures; consequently, metasurfaces offer the possibility of less-lossy structures. In this overview paper, we discuss the theoretical basis by which metasurfaces should be characterized, and discuss their various applications. We will see how metasurfaces are distinguished from conventional frequency-selective surfaces. Metasurfaces have a wide range of potential applications in electromagnetics (ranging from low microwave to optical frequencies), including: (1) controllable “smart” surfaces, (2) miniaturized cavity resonators, (3) novel wave-guiding structures, (4) angular-independent surfaces, (5) absorbers, (6) biomedical devices, (7) terahertz switches, and (8) fluid-tunable frequency-agile materials, to name only a few. In this review, we will see that the development in recent years of such materials and/or surfaces is bringing us closer to realizing the exciting speculations made over one hundred years ago by the work of Lamb, Schuster, and Pocklington, and later by Mandel'shtam and Veselago.
63213d080a43660ac59ea12e3c35e6953f6d7ce8
In this work, we introduce a new video representation for action classification that aggregates local convolutional features across the entire spatio-temporal extent of the video. We do so by integrating state-of-the-art two-stream networks [42] with learnable spatio-temporal feature aggregation [6]. The resulting architecture is end-to-end trainable for whole-video classification. We investigate different strategies for pooling across space and time and combining signals from the different streams. We find that: (i) it is important to pool jointly across space and time, but (ii) appearance and motion streams are best aggregated into their own separate representations. Finally, we show that our representation outperforms the two-stream base architecture by a large margin (13% relative) as well as outperforms other baselines with comparable base architectures on HMDB51, UCF101, and Charades video classification benchmarks.
b2e83112b2956483c6cc5982b56f5987788dd973
The design of a multiband reflector antenna for an on-the-move satellite communications terminal is presented. This antenna was designed to operate with numerous modern and future military communications satellites, which in turn requires the antenna to be capable of operation at multiple frequencies and polarizations while maintaining high aperture efficiency. Several feed antenna concepts were developed to accomplish this task, and are discussed in detail. Multiple working prototypes based on this design have been realized, with excellent performance. Measured data of individual antenna components and the complete assembly is also included
1dc697ae0d6a1e90dc8ff061e36441b6efdcff7e
We present an iterative linear-quadratic-Gaussian method for locally-optimal feedback control of nonlinear stochastic systems subject to control constraints. Previously, similar methods have been restricted to deterministic unconstrained problems with quadratic costs. The new method constructs an affine feedback control law, obtained by minimizing a novel quadratic approximation to the optimal cost-to-go function. Global convergence is guaranteed through a Levenberg-Marquardt method; convergence in the vicinity of a local minimum is quadratic. Performance is illustrated on a limited-torque inverted pendulum problem, as well as a complex biomechanical control problem involving a stochastic model of the human arm, with 10 state dimensions and 6 muscle actuators. A Matlab implementation of the new algorithm is available at www.cogsci.ucsd.edu/~todorov.
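The sketch below is not the authors' constrained stochastic iLQG; it only shows the standard finite-horizon LQR backward recursion that iterative LQG-type methods apply to local linear-quadratic approximations of the dynamics and cost. The double-integrator system and cost weights are invented for illustration.

```python
# Hedged sketch: finite-horizon discrete-time LQR backward pass (Riccati recursion),
# the inner building block of iterative LQG-style trajectory optimizers.
import numpy as np

def lqr_backward(A, B, Q, R, Qf, T):
    P = Qf.copy()
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain: u_t = -K_t x_t
        P = Q + A.T @ P @ (A - B @ K)                        # Riccati recursion (backward in time)
        gains.append(K)
    return gains[::-1]                                       # K_0 ... K_{T-1}

# Double-integrator example (position, velocity; force input).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q, R, Qf = np.diag([1.0, 0.1]), np.array([[0.01]]), np.diag([10.0, 1.0])
Ks = lqr_backward(A, B, Q, R, Qf, T=50)

x = np.array([[1.0], [0.0]])
for K in Ks:                                                 # forward rollout with the feedback law
    x = A @ x + B @ (-K @ x)
print("final state:", x.ravel())                             # driven toward the origin
```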
3a68b92df71637d2ba0ecc1cde8cfe5b29f2d709
The Lucia comprehension system attempts to model human comprehension by using the Soar cognitive architecture, Embodied Construction Grammar (ECG), and an incremental, word-by-word approach to grounded processing. Traditional approaches use techniques such as parallel paths and global optimization to resolve ambiguities. Here we describe how Lucia deals with lexical, grammatical, structural, and semantic ambiguities by using knowledge from the surrounding linguistic and environmental context. It uses a local repair mechanism to maintain a single path, and shows a garden path effect when local repair breaks down. Data on adding new linguistic knowledge show that the ECG grammar grows faster than the knowledge for handling context, and that low-level grammar items grow faster than more general ones.
0fbb184871bd7660bc579178848d58beb8288b7d
We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms.
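A minimal numpy sketch of one common reading of the aggregation step described above: sum the residuals of local descriptors to their nearest codebook centroid, flatten, and L2-normalize. The codebook and descriptors are random toy data, and the joint dimension-reduction/indexing optimization is not included.

```python
# Hedged sketch: aggregate local descriptors into one fixed-length vector by summing
# residuals to the nearest centroid of a small codebook, then L2-normalizing.
import numpy as np

rng = np.random.default_rng(0)
k, d = 8, 16                                   # codebook size and descriptor dimension (toy values)
codebook = rng.normal(size=(k, d))             # in practice: k-means centroids learned offline
descriptors = rng.normal(size=(300, d))        # local descriptors of one image (e.g., SIFT-like)

# Assign each descriptor to its nearest centroid.
assign = np.argmin(((descriptors[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)

agg = np.zeros((k, d))
for c in range(k):
    members = descriptors[assign == c]
    if len(members):
        agg[c] = (members - codebook[c]).sum(axis=0)   # sum of residuals per centroid

v = agg.ravel()
v /= np.linalg.norm(v) + 1e-12                 # final k*d-dimensional image representation
print(v.shape)
```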
14815c67e4d215acf9558950e2762759229fe277
Given a real world graph, how should we lay out its edges? How can we compress it? These questions are closely related, and the typical approach so far is to find clique-like communities, like the 'cavemen graph', and compress them. We show that the block-diagonal mental image of the 'cavemen graph' is the wrong paradigm, in full agreement with earlier results that real world graphs have no good cuts. Instead, we propose to envision graphs as a collection of hubs connecting spokes, with super-hubs connecting the hubs, and so on, recursively. Based on this idea, we propose the Slash Burn method (burn the hubs, and slash the remaining graph into smaller connected components). Our viewpoint has several advantages: (a) it avoids the 'no good cuts' problem, (b) it gives better compression, and (c) it leads to faster execution times for matrix-vector operations, which are the backbone of most graph processing tools. Experimental results show that our Slash Burn method consistently outperforms other methods on all datasets, giving good compression and faster running time.
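A rough networkx reading of the burn-the-hubs, slash-the-components idea sketched above: repeatedly remove the k highest-degree nodes, peel off the small connected components, and keep iterating on the giant component. The value of k, the synthetic graph, and the exact spoke ordering are illustrative choices, not the paper's algorithmic details.

```python
# Hedged sketch of a Slash Burn-style node ordering: hubs go to the front, spokes of
# small components go to the back, and the giant component is processed recursively.
import networkx as nx

def slashburn_order(G, k=2):
    G = G.copy()
    front, back = [], []
    while G.number_of_nodes() > 0:
        hubs = [n for n, _ in sorted(G.degree, key=lambda t: t[1], reverse=True)[:k]]
        front.extend(hubs)                     # "burn" the hubs
        G.remove_nodes_from(hubs)
        comps = sorted(nx.connected_components(G), key=len)
        if not comps:
            break
        giant = comps[-1]
        for comp in comps[:-1]:                # "slash": small components go to the back
            back = list(comp) + back
            G.remove_nodes_from(comp)
        if len(giant) <= k:                    # stop once the remaining core is tiny
            back = list(giant) + back
            G.remove_nodes_from(giant)
    return front + back

G = nx.barabasi_albert_graph(200, 2, seed=0)   # a hub-heavy synthetic graph
order = slashburn_order(G, k=5)
print(order[:10])                              # the first entries are the largest hubs
```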
5f09cb313b6fb14877c6b5be79294faf1f4f7f02
The relationship between information systems (IS) and organizational strategies has been a much discussed topic with most of the prior studies taking a highly positive view of technology’s role in enabling organizational strategies. Despite this wealth of studies, there is a dearth of empirical investigations on how IS enable specific organizational strategies. Through a qualitative empirical investigation of five case organizations this research derives five organizational strategies that are specifically enabled through IS. The five strategies; (i) generic-heartland, (ii) craft-based selective, (iii) adhoc, IT-driven, (iv) corporative-orchestrated and (v) transformative provide a unique perspective of how IS enable organizational strategy.
ad384ff98f002c16ccdb8264a631068f2c3287f2
The blockchain initially gained traction in 2008 as the technology underlying Bitcoin [105], but now has been employed in a diverse range of applications and created a global market worth over $150B as of 2017. What distinguishes blockchains from traditional distributed databases is the ability to operate in a decentralized setting without relying on a trusted third party. As such their core technical component is consensus: how to reach agreement among a group of nodes. This has been extensively studied already in the distributed systems community for closed systems, but its application to open blockchains has revitalized the field and led to a plethora of new designs. The inherent complexity of consensus protocols and their rapid and dramatic evolution makes it hard to contextualize the design landscape. We address this challenge by conducting a systematic and comprehensive study of blockchain consensus protocols. After first discussing key themes in classical consensus protocols, we describe: (i) protocols based on proof-of-work (PoW), (ii) proof-of-X (PoX) protocols that replace PoW with more energy-efficient alternatives, and (iii) hybrid protocols that are compositions or variations of classical consensus protocols. We develop a framework to evaluate their performance, security and design properties, and use it to systematize key themes in the protocol categories described above. This evaluation leads us to identify research gaps and challenges for the community to consider in future research endeavours.
aaaea1314570b6b692ff3cce3715ec9dada7c7aa
A low-cost, fully-integrated antenna-in-package solution for 60 GHz phased-array systems is demonstrated. Sixteen patch antennas are integrated into a 28 mm × 28 mm ball grid array together with a flip-chip attached transmitter or receiver IC. The packages have been implemented using low temperature co-fired ceramic technology. 60 GHz interconnects, including flip-chip transitions and via structures, are optimized using full-wave simulation. Anechoic chamber measurement has shown ~ 5 dBi unit antenna gain across all four IEEE 802.15.3c channels, achieving excellent model-to-hardware correlation. The packaged transmitter and receiver ICs, mounted on evaluation boards, have demonstrated beam-steered, non-line-of-sight links with data rates up to 5.3 Gb/s.
0d3f6d650b1a878d5896e3b85914aeaeb9d78a4f
An overview of the key challenges facing the practice of medicine today is presented, along with the need for technological solutions that can "prevent" problems. Then, the development of the Wearable Motherboard™ (Smart Shirt) as a platform for sensors and monitoring devices that can unobtrusively monitor the health and well-being of individuals (directly and/or remotely) is described. This is followed by a discussion of the applications and impact of this technology in the continuum of life: from preventing SIDS to facilitating independent living for senior citizens. Finally, the future advancements in the area of wearable, yet comfortable, systems that can continue the transformation of healthcare, all aimed at enhancing the quality of life for humans, are presented.
2a4a7d37babbab47ef62a60d9f0ea2cfa979cf08
The localization problem is to determine an assignment of coordinates to nodes in a wireless ad-hoc or sensor network that is consistent with measured pairwise node distances. Most previously proposed solutions to this problem assume that the nodes can obtain pairwise distances to other nearby nodes using some ranging technology. However, for a variety of reasons that include obstructions and lack of reliable omnidirectional ranging, this distance information is hard to obtain in practice. Even when pairwise distances between nearby nodes are known, there may not be enough information to solve the problem uniquely. This paper describes MAL, a mobile-assisted localization method which employs a mobile user to assist in measuring distances between node pairs until these distance constraints form a "globally rigid" structure that guarantees a unique localization. We derive the required constraints on the mobile's movement and the minimum number of measurements it must collect; these constraints depend on the number of nodes visible to the mobile in a given region. We show how to guide the mobile's movement to gather a sufficient number of distance samples for node localization. We use simulations and measurements from an indoor deployment using the Cricket location system to investigate the performance of MAL, finding in real-world experiments that MAL's median pairwise distance error is less than 1.5% of the true node distance.
e42838d321ece2ef7f8399c54d4dd856bfdbe4a4
The work presented in this paper focuses on the accuracy of models for broad-band ferrite-based coaxial transmission-line transformers. Soft ferrites are largely used in VHF/UHF components, allowing band enlargement on the low-edge side. Degradation of frequency performance on the high-edge side is produced both by ferrite losses and by parasitic capacitance due to the connection to the thermal and electrical ground in high-power applications. Both a circuit model for low-power applications and a scalable e.m. model for high-power applications are presented and discussed.
536c6d5e59a05da27153303a19e0274262affdcd
To understand the dynamics of optimization in deep neural networks, we develop a tool to study the evolution of the entire Hessian spectrum throughout the optimization process. Using this, we study a number of hypotheses concerning smoothness, curvature, and sharpness in the deep learning literature. We then thoroughly analyze a crucial structural feature of the spectra: in non-batch normalized networks, we observe the rapid appearance of large isolated eigenvalues in the spectrum, along with a surprising concentration of the gradient in the corresponding eigenspaces. In batch normalized networks, these two effects are almost absent. We characterize these effects, and explain how they affect optimization speed through both theory and experiments. As part of this work, we adapt advanced tools from numerical linear algebra that allow scalable and accurate estimation of the entire Hessian spectrum of ImageNet-scale neural networks; this technique may be of independent interest in other applications.
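The sketch below shows only the basic ingredient behind such spectrum estimates: matrix-free Hessian-vector products (here via JAX autodiff) fed to an iterative Lanczos-based eigensolver for extreme eigenvalues. The paper's full stochastic machinery for entire spectral densities is not reproduced, and the tiny least-squares loss stands in for a network loss.

```python
# Hedged sketch: extreme Hessian eigenvalue of a small loss via matrix-free
# Hessian-vector products (JAX) and scipy's Lanczos-based eigensolver.
import jax
import jax.numpy as jnp
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def loss(w):                                   # toy least-squares loss (stand-in for a network loss)
    X = jnp.arange(12.0).reshape(4, 3)
    y = jnp.array([1.0, 0.0, 2.0, 1.0])
    return jnp.mean((X @ w - y) ** 2)

w0 = jnp.zeros(3)

def hvp(v):                                    # Hessian-vector product, no explicit Hessian formed
    tangent = jnp.asarray(v, dtype=w0.dtype)
    return np.asarray(jax.jvp(jax.grad(loss), (w0,), (tangent,))[1], dtype=np.float64)

H = LinearOperator((3, 3), matvec=hvp, dtype=np.float64)
top = eigsh(H, k=1, which="LA", return_eigenvectors=False)
print("largest Hessian eigenvalue:", float(top[0]))
```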
8acf78df5aa283f02d3805867e1dd1c6a97f389b
The traditional audit paradigm is outdated in the real time economy. Innovation of the traditional audit process is necessary to support real time assurance. Practitioners and academics are exploring continuous auditing as a potential successor to the traditional audit paradigm. Using technology and automation, continuous auditing methodology enhances the efficiency and effectiveness of the audit process to support real time assurance. This paper defines how continuous auditing methodology introduces innovation to practice in seven dimensions and proposes a four-stage paradigm to advance future research. In addition, we formulate a set of methodological propositions concerning the future of assurance for practitioners and academic researchers.
45fcbb3149fdb01a130f5f013a4713328ee3e3c7
Modeling Narrative Discourse
b3dbdd8859e9a38712816dddf221843a5cae95a8
Despite the burgeoning research on social entrepreneurship (SE), SE strategies remain poorly understood. Drawing on extant research on social activism and social change, empowerment and SE models, we explore, classify and validate the strategies used by 2,334 social entrepreneurs affiliated with the world's largest SE support organization, Ashoka. The results of the topic modeling of the social entrepreneurs' strategy profiles reveal that they employed a total of 39 change-making strategies that vary across resources (material versus symbolic strategies), specificity (general versus specific strategies), and mode of participation (mass versus elite participation strategies); they also vary across fields of practice and time. Finally, we identify six meta-SE strategies (a reduction from the 39 strategies) and identify four new meta-SE strategies (i.e., system reform, physical capital development, evidence-based practices, and prototyping) that have been overlooked in prior SE research. Our findings extend and deepen the research into SE strategies and offer a comprehensive model of SE strategies that advances theory, practice and policy making.
7ca3809484eb57c509acc18b016e9b010759dfa1
Undoing the image formation process and therefore decomposing appearance into its intrinsic properties is a challenging task due to the under-constrained nature of this inverse problem. While significant progress has been made on inferring shape, materials and illumination from images only, progress in an unconstrained setting is still limited. We propose a convolutional neural architecture to estimate reflectance maps of specular materials in natural lighting conditions. We achieve this in an end-to-end learning formulation that directly predicts a reflectance map from the image itself. We show how to improve estimates by facilitating additional supervision in an indirect scheme that first predicts surface orientation and afterwards predicts the reflectance map by a learning-based sparse data interpolation. In order to analyze performance on this difficult task, we propose a new challenge of Specular MAterials on SHapes with complex IllumiNation (SMASHINg) using both synthetic and real images. Furthermore, we show the application of our method to a range of image editing tasks on real images.
c1742ca74f40c44dae2af6a992e569edc969c62c
This paper presents an overview of the possibility offered by 3D plastic printers for a quick, simple and affordable manufacturing of working filters and other passive devices such as antennas. This paper thus goes through numerous examples of passive devices made with the Fused Deposition Modeling (FDM) and material jetting (Polyjet©) technologies and will highlight how they can now be considered as a solid companion to RF designers during an optimization process up to Ku and higher bands.