_id: string (length 40)
text: string (0 to 10k characters)
26d9c40e8a6099ce61a5d9a6afa11814c45def01
We propose a novel approach to constrained path planning that is based on a special search space which efficiently encodes feasible paths. The paths are encoded implicitly as connections between states, but only feasible and local connections are included. Once this search space is developed, we systematically generate a near-minimal set of spatially distinct path primitives. This set expresses the local connectivity of constrained motions and also eliminates redundancies. The set of primitives is used to define a heuristic search, thereby creating a very efficient path planner at the chosen resolution. We also discuss a wide variety of space and terrestrial robotics applications where this motion planner can be especially useful.
51fea461cf3724123c888cb9184474e176c12e61
Image registration finds a variety of applications in computer vision. Unfortunately, traditional image registration techniques tend to be costly. We present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of Newton-Raphson iteration. Our technique is faster because it examines far fewer potential matches between the images than existing techniques. Furthermore, this registration technique can be generalized to handle rotation, scaling and shearing. We show how our technique can be adapted for use in a stereo vision system.
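The translational core of this gradient-based registration can be sketched in a few lines. The following is a minimal illustration, assuming two grayscale NumPy images related by a pure translation; the function name and the nearest-neighbour warping are simplifications for brevity, not the paper's full method.

```python
import numpy as np

def estimate_translation(img1, img2, n_iters=20):
    """Estimate the shift (dx, dy) aligning img1 to img2 using spatial
    intensity gradients and Newton-Raphson style updates (a sketch)."""
    d = np.zeros(2)                                   # current (dx, dy) estimate
    ys, xs = np.mgrid[0:img1.shape[0], 0:img1.shape[1]]
    for _ in range(n_iters):
        # Warp img2 by the current estimate (nearest neighbour for brevity).
        x = np.clip(np.round(xs + d[0]).astype(int), 0, img1.shape[1] - 1)
        y = np.clip(np.round(ys + d[1]).astype(int), 0, img1.shape[0] - 1)
        warped = img2[y, x]
        error = warped - img1                         # intensity residual
        gy, gx = np.gradient(warped)                  # spatial intensity gradients
        # Linearized least-squares update: solve J * step ~= -error.
        J = np.stack([gx.ravel(), gy.ravel()], axis=1)
        step, *_ = np.linalg.lstsq(J, -error.ravel(), rcond=None)
        d += step
        if np.linalg.norm(step) < 1e-3:               # converged
            break
    return d
```

Refining a single candidate alignment with these gradient steps, rather than scoring many candidate shifts, is what makes the approach cheaper than exhaustive matching.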
91a613ed06c4654f38f5c2e7fe6ebffeec53d887
Extreme learning machine (ELM) is a competitive machine learning technique that is simple in theory and fast in implementation. The networks are "generalized" single-hidden-layer feedforward networks, which are quite diverse in their choice of feature mapping functions or kernels. To deal with data with an imbalanced class distribution, a weighted ELM is proposed that retains the advantages of the original ELM: (1) it is simple in theory and convenient in implementation; (2) a wide range of feature mapping functions or kernels is available for the proposed framework; (3) the proposed method can be applied directly to multiclass classification tasks. In addition, after integrating the weighting scheme, (1) the weighted ELM is able to deal with data with an imbalanced class distribution while maintaining the good performance on well-balanced data of the unweighted ELM; (2) by assigning different weights to each example according to users' needs, the weighted ELM can be generalized to cost-sensitive learning.
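The weighting scheme amounts to a weighted ridge-regression solve for the output weights. Below is a minimal NumPy sketch under common assumptions (sigmoid random features, inverse-class-frequency weights, the regularized closed-form solution); all names and defaults are illustrative rather than the paper's exact formulation.

```python
import numpy as np

def weighted_elm_train(X, y, n_hidden=100, C=1.0, seed=0):
    """Weighted ELM sketch: random sigmoid hidden layer, then a weighted
    ridge-regression solve for the output weights beta."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    classes = np.unique(y)
    W_in = rng.standard_normal((d, n_hidden))          # random, fixed input weights
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))          # hidden-layer output matrix
    T = -np.ones((n, classes.size))                    # one-vs-rest targets in {-1, +1}
    for k, c in enumerate(classes):
        T[y == c, k] = 1.0
    counts = {c: np.sum(y == c) for c in classes}
    w = np.array([1.0 / counts[c] for c in y])         # inverse class-frequency weights
    # beta = (I/C + H^T W H)^{-1} H^T W T, with W = diag(w)
    beta = np.linalg.solve(np.eye(n_hidden) / C + H.T @ (H * w[:, None]),
                           H.T @ (T * w[:, None]))
    return W_in, b, beta, classes

def weighted_elm_predict(X, W_in, b, beta, classes):
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))
    return classes[np.argmax(H @ beta, axis=1)]
```

Up-weighting minority-class rows of the hidden-layer matrix is what lets the same closed-form solve handle imbalanced data.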
055d55726d45406a6f115c4d26f510bade021be3
The project aims to build a monocular vision autonomous car prototype using a Raspberry Pi as the processing chip. An HD camera along with an ultrasonic sensor is used to provide the necessary data from the real world to the car. The car is capable of reaching the given destination safely and intelligently, thus avoiding the risk of human error. Many existing algorithms, such as lane detection and obstacle detection, are combined to provide the necessary control to the car.
b39e5f7217abae9e2c682ee5068a11309631b93b
Moving objects are becoming increasingly attractive to the data mining community due to continuous advances in technologies like GPS, mobile computers, and wireless communication devices. Mining spatio-temporal data can benefit many different users: marketing managers identifying the right customers at the right time, cellular companies optimizing resource allocation, web site administrators making data allocation decisions, animal migration researchers studying migration patterns, and meteorology experts forecasting the weather. In this research we use a compact representation of a mobile trajectory and define a new similarity measure between trajectories. We also propose an incremental clustering algorithm for finding evolving groups of similar mobile objects in spatio-temporal data. The algorithm is evaluated empirically by the quality of object clusters (using the Dunn and Rand indexes), memory space efficiency, execution times, and scalability (run time vs. number of objects).
2c3dffc38d40b725bbd2af80694375e6fc0b1b45
Super resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently and ignores the intrinsic temporal dependency of video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often shows high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN named bidirectional recurrent convolutional network for efficient multi-frame SR. Different from vanilla RNNs, 1) the commonly used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, which greatly reduces the large number of network parameters and models the temporal dependency at a finer, patch-based rather than frame-based, level; and 2) connections from input layers at previous timesteps to the current hidden layer are added by 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns for short-term fast-varying motions in local adjacent frames. Due to the cheap convolutional operations, our model has a low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With the powerful temporal dependency modeling, our model can super resolve videos with complex motions and achieve good performance.
8dc7cc939af832d071c2a050fd0284973ac70695
Computational RFID (CRFID) platforms have enabled reconfigurable, battery-free applications for close to a decade. However, several factors have impeded their widespread adoption: low communication range, low throughput, and expensive infrastructure (CRFID readers usually cost upwards of $1000). This paper presents LoRea, a backscatter reader that achieves an order of magnitude higher range than existing CRFID readers, while costing a fraction of their price. LoRea achieves this by diverging from existing designs of CRFID readers and, more specifically, how self-interference is tackled. LoRea builds on recent work that frequency-shifts backscatter transmissions away from the carrier signal to reduce self-interference. LoRea also decouples the carrier generation from the reader, helping to further decrease self-interference. Decoupling carrier generation also enables the use of deployed infrastructure of smartphones and sensor nodes to provide the carrier signal. Together these methods reduce cost and complexity of the reader. LoRea also purposefully operates at lower bitrates than recent backscatter systems, which enables high sensitivity and longer range. We evaluate LoRea experimentally and find that it achieves a communication range of up to 225 m in line-of-sight scenarios. In indoor environments, where the signal traverses several walls separating the reader and the backscatter tag, LoRea achieves a range of 30 m. These results illustrate how LoRea outperforms state-of-the-art backscatter systems and CRFID platforms.
b348042a91beb4fa0c60fd94f27cf0366d5f9630
The safety of the planned paths of autonomous cars with respect to the movement of other traffic participants is considered. To this end, the stochastic occupancy of the road by other vehicles is predicted. The prediction considers uncertainties originating from the measurements and the possible behaviors of other traffic participants. In addition, the interaction of traffic participants, as well as the limitation of driving maneuvers due to the road geometry, is considered. The result of the presented approach is the probability of a crash for a specific trajectory of the autonomous car. The presented approach is efficient as most of the intensive computations are performed offline, which results in a lean online algorithm suitable for real-time application.
f69c83aab19183795af7612c3f224b5e116f242a
fda1e13a2eaeaa0b4434833d3ee0eb8e79b0ba94
One of the fundamental human cognitive processes is problem solving. As a higher-layer cognitive process, problem solving interacts with many other cognitive processes such as abstraction, searching, learning, decision making, inference, analysis, and synthesis on the basis of internal knowledge representation by the object-attribute-relation (OAR) model. Problem solving is a cognitive process of the brain that searches for a solution to a given problem or finds a path to reach a given goal. When a problem object is identified, problem solving can be perceived as a search process in the memory space for finding a relationship between a set of solution goals and a set of alternative paths. This paper presents both a cognitive model and a mathematical model of the problem solving process. The cognitive structures of the brain and the mechanisms of internal knowledge representation behind the cognitive process of problem solving are explained. The cognitive process is formally described using real-time process algebra (RTPA) and concept algebra. This work is part of the cognitive computing project that is designed to reveal and simulate the fundamental mechanisms and processes of the brain according to Wang's layered reference model of the brain (LRMB), which is expected to lead to the development of future-generation methodologies for cognitive computing and novel cognitive computers that are capable of thinking, learning, and perceiving.
f6284d750cf12669ca3bc12a1b485545af776239
Over the last few years, deep learning techniques have yielded significant improvements in image inpainting. However, many of these techniques fail to reconstruct reasonable structures, as they are commonly over-smoothed and/or blurry. This paper develops a new approach for image inpainting that does a better job of reproducing filled regions exhibiting fine details. We propose a two-stage adversarial model, EdgeConnect, that comprises an edge generator followed by an image completion network. The edge generator hallucinates edges of the missing region (both regular and irregular) of the image, and the image completion network fills in the missing regions using the hallucinated edges as a prior. We evaluate our model end-to-end over the publicly available datasets CelebA, Places2, and Paris StreetView, and show that it outperforms current state-of-the-art techniques quantitatively and qualitatively.
04f4679765d2f71576dd77c1b00a2fd92e5c6da4
Current fine-grained classification approaches often rely on a robust localization of object parts to extract localized feature representations suitable for discrimination. However, part localization is a challenging task due to the large variation of appearance and pose. In this paper, we show how pre-trained convolutional neural networks can be used for robust and efficient object part discovery and localization without the necessity to actually train the network on the current dataset. Our approach, called “part detector discovery” (PDD), is based on analyzing the gradient maps of the network outputs and finding activation centers spatially related to annotated semantic parts or bounding boxes. This allows us not just to obtain excellent performance on the CUB-200-2011 dataset, but, in contrast to previous approaches, also to perform detection and bird classification jointly without requiring a given bounding box annotation during testing and ground-truth parts during training. The code is available at http://www.inf-cv.uni-jena.de/part_discovery and https://github.com/cvjena/PartDetectorDisovery.
9f3f6a33eb412d508da319bb270112075344abd0
We propose a novel probabilistic technique for modeling and extracting salient structure from large document collections. As in clustering and topic modeling, our goal is to provide an organizing perspective into otherwise overwhelming amounts of information. We are particularly interested in revealing and exploiting relationships between documents. To this end, we focus on extracting diverse sets of threads: singly-linked, coherent chains of important documents. To illustrate, we extract research threads from citation graphs and construct timelines from news articles. Our method is highly scalable, running on a corpus of over 30 million words in about four minutes, more than 75 times faster than a dynamic topic model. Finally, the results from our model more closely resemble human news summaries according to several metrics and are also preferred by human judges.
057d5f66a873ec80f8ae2603f937b671030035e6
In this paper, we study the challenging problem of predicting the dynamics of objects in static images. Given a query object in an image, our goal is to provide a physical understanding of the object in terms of the forces acting upon it and its long-term motion in response to those forces. Direct and explicit estimation of the forces and the motion of objects from a single image is extremely challenging. We define intermediate physical abstractions called Newtonian scenarios and introduce the Newtonian Neural Network (N3), which learns to map a single image to a state in a Newtonian scenario. Our evaluations show that our method can reliably predict the dynamics of a query object from a single image. In addition, our approach can provide physical reasoning that supports the predicted dynamics in terms of velocity and force vectors. To spur research in this direction, we compiled the Visual Newtonian Dynamics (VIND) dataset, which includes more than 6000 videos aligned with Newtonian scenarios represented using game engines, and more than 4500 still images with their ground-truth dynamics.
c4f7d2ca3105152e5be77d36add2582977649b1d
The Internet of Things (IoT) continues to grow as uniquely identifiable objects are added to the internet. The addition of these devices, and their remote connectivity, has brought a new level of efficiency into our lives. However, the security of these devices has come into question. While many may be secure, the sheer number creates an environment where even a small percentage of insecure devices may create significant vulnerabilities. This paper evaluates some of the emerging vulnerabilities that exist and puts some figures to the scale of the threat.
8671518a43bc7c9d5446b49640ee8783d5b580d7
e5f67b995b09e750bc1a32293d5a528de7f601a9
As modern systems become increasingly complex, current security practices lack effective methodologies to adequately address system security. This paper proposes a repeatable and tailorable framework to assist in the application of systems security engineering (SSE) processes, activities, and tasks as defined in the recently released National Institute of Standards and Technology (NIST) Special Publication 800-160. First, a brief survey of systems-oriented security methodologies is provided. Next, an examination of the relationships between the NIST-defined SSE processes is conducted to provide context for the engineering problem space. These findings inform a mapping of the NIST SSE processes to seven system-agnostic security domains, which enables prioritization for three types of systems (conventional IT, cyber-physical, and defense). These concrete examples provide further understanding for applying and prioritizing the SSE effort. The goal of this paper is to assist practitioners by informing the efficient application of the 30 processes, 111 activities, and 428 tasks defined in NIST SP 800-160. The customizable framework tool is available online for developers to employ, modify, and tailor to meet their needs.
1beeb25756ea352634e0c78ed653496a3474925e
fac5a9a18157962cff38df6d4ae69f8a7da1cfa8
In this paper, we propose two novel methods for face recognition under arbitrary unknown lighting by using a spherical harmonics illumination representation, which require only one training image per subject and no 3D shape information. Our methods are based on the result that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace. We provide two methods to estimate the spherical harmonic basis images spanning this space from just one image. Our first method builds the statistical model based on a collection of 2D basis images. We demonstrate that, by using the learned statistics, we can estimate the spherical harmonic basis images from just one image taken under arbitrary illumination conditions if there is no pose variation. Compared to the first method, the second method builds the statistical models directly in 3D space by combining the spherical harmonic illumination representation and a 3D morphable model of human faces to recover basis images from images across both poses and illuminations. After estimating the basis images, we use the same recognition scheme for both methods: we recognize the face for which there exists a weighted combination of basis images that is the closest to the test face image. We provide a series of experiments that achieve high recognition rates under a wide range of illumination conditions, including multiple sources of illumination. Our methods achieve comparable levels of accuracy to methods that have much more onerous training data requirements. A comparison of the two methods is also provided.
6831db33ea9db905b66b09f476c429f085ebb45f
The present study describes the development of a triaxial accelerometer (TA) and a portable data processing unit for the assessment of daily physical activity. The TA is composed of three orthogonally mounted uniaxial piezoresistive accelerometers and can be used to register accelerations covering the amplitude and frequency ranges of human body acceleration. Interinstrument and test-retest experiments showed that the offset and the sensitivity of the TA were equal for each measurement direction and remained constant on two measurement days. Transverse sensitivity was significantly different for each measurement direction, but did not influence accelerometer output (<3% of the sensitivity along the main axis). The data unit enables the on-line processing of accelerometer output to a reliable estimator of physical activity over eight-day periods. Preliminary evaluation of the system in 13 male subjects during standardized activities in the laboratory demonstrated a significant relationship between accelerometer output and energy expenditure due to physical activity, the standard reference for physical activity (r=0.89). Shortcomings of the system are its low sensitivity to sedentary activities and the inability to register static exercise. The validity of the system for the assessment of normal daily physical activity and specific activities outside the laboratory should be studied in free-living subjects.
f6cd444c939c0b5c08b07bb35fd694a45e07b97e
The latched comparator is utilized in virtually all analog-to-digital converter architectures. It uses a positive feedback mechanism to regenerate the analog input signal into a full-scale digital level. Such high voltage variations in the regeneration nodes are coupled to the input voltage - kickback noise. This paper reviews existing solutions to minimize the kickback noise and proposes two new ones. HSPICE simulations verify the effectiveness of our techniques.
043afbd936c95d0e33c4a391365893bd4102f1a7
Large deep neural network models have recently demonstrated state-of-the-art accuracy on hard visual recognition tasks. Unfortunately, such models are extremely time consuming to train and require large amounts of compute cycles. We describe the design and implementation of a distributed system called Adam, composed of commodity server machines, that trains such models and exhibits world-class performance, scaling, and task accuracy on visual recognition tasks. Adam achieves high efficiency and scalability through whole-system co-design that optimizes and balances workload computation and communication. We exploit asynchrony throughout the system to improve performance and show that it additionally improves the accuracy of trained models. Adam is significantly more efficient and scalable than was previously thought possible: it used 30x fewer machines to train a large 2-billion-connection model to 2x higher accuracy in comparable time on the ImageNet 22,000-category image classification task than the system that previously held the record for this benchmark. We also show that task accuracy improves with larger models. Our results provide compelling evidence that a distributed systems-driven approach to deep learning using current training algorithms is worth pursuing.
63d630482d59e83449f73b51c0efb608e662d3ef
Printed electronics are considered for wireless electronic tags and sensors within the future Internet-of-things (IoT) concept. As a consequence of the low charge carrier mobility of present printable organic and inorganic semiconductors, the operational frequency of printed rectifiers is not high enough to enable direct communication and powering between mobile phones and printed e-tags. Here, we report an all-printed diode operating up to 1.6 GHz. The device, based on two stacked layers of Si and NbSi2 particles, is manufactured on a flexible substrate at low temperature and in ambient atmosphere. The high charge carrier mobility of the Si microparticles allows device operation to occur in the charge injection-limited regime. The asymmetry of the oxide layers in the resulting device stack leads to rectification of tunneling current. Printed diodes were combined with antennas and electrochromic displays to form an all-printed e-tag. The harvested signal from a Global System for Mobile Communications mobile phone was used to update the display. Our findings demonstrate a new communication pathway for printed electronics within IoT applications.
c15c068ac4b639646a74ad14fc994016f8925901
A-Si:H TFTs are traditionally used in backplane arrays for active-matrix displays and occasionally in row or column drive electronics, with current efforts focusing on flexible displays and drivers. This paper extends flexible electronics to complex digital circuitry by designing a standard cell library for a-Si:H TFTs on flexible stainless steel and plastic substrates. The standard cell library enables layout automation with a standard cell place-and-route tool, significantly speeding the layout of a-Si:H digital circuits on the backplane to enhance display functionality. Since only n-channel transistors are available, the gates are designed with a bootstrap pull-up network to ensure good output voltage swings. The library developed consists of 7 gates: 5 combinational gates (Inverter, NAND2, NOR2, NOR3, and MUX2) and 2 sequential gates (latch and ‘D’ flip-flop). Test structures have been fabricated to experimentally characterize the delay vs. fan-out of the standard cells. Automatic extraction of electrical interconnections from layout, enabling layout-versus-schematic (LVS) checks, has also been incorporated into the existing tool suite for bottom-gate a-Si:H TFTs. A 3-bit counter was designed, fabricated, and tested to demonstrate the standard cell library.
d1bf0962711517cff15205b1844d6b8d625ca7da
Social entrepreneurship, as a practice and a field for scholarly investigation, provides a unique opportunity to challenge, question, and rethink concepts and assumptions from different fields of management and business research. This article puts forward a view of social entrepreneurship as a process that catalyzes social change and addresses important social needs in a way that is not dominated by direct financial benefits for the entrepreneurs. Social entrepreneurship is seen as differing from other forms of entrepreneurship in the relatively higher priority given to promoting social value and development versus capturing economic value. To stimulate future research the authors introduce the concept of embeddedness as a nexus between theoretical perspectives for the study of social entrepreneurship.
58461d01e8b6bd177d26ee17f9cf332cb8ca286a
We introduce space–time block coding, a new paradigm for communication over Rayleigh fading channels using multiple transmit antennas. Data is encoded using a space–time block code and the encoded data is split into n streams which are simultaneously transmitted using n transmit antennas. The received signal at each receive antenna is a linear superposition of the n transmitted signals perturbed by noise. Maximum-likelihood decoding is achieved in a simple way through decoupling of the signals transmitted from different antennas rather than joint detection. This uses the orthogonal structure of the space–time block code and gives a maximum-likelihood decoding algorithm which is based only on linear processing at the receiver. Space–time block codes are designed to achieve the maximum diversity order for a given number of transmit and receive antennas subject to the constraint of having a simple decoding algorithm. The classical mathematical framework of orthogonal designs is applied to construct space–time block codes. It is shown that space–time block codes constructed in this way only exist for few sporadic values of n. Subsequently, a generalization of orthogonal designs is shown to provide space–time block codes for both real and complex constellations for any number of transmit antennas. These codes achieve the maximum possible transmission rate for any number of transmit antennas using any arbitrary real constellation such as PAM. For an arbitrary complex constellation such as PSK and QAM, space–time block codes are designed that achieve 1/2 of the maximum possible transmission rate for any number of transmit antennas. For the specific cases of two, three, and four transmit antennas, space–time block codes are designed that achieve, respectively, all, 3/4, and 3/4 of the maximum possible transmission rate using arbitrary complex constellations. The best tradeoff between the decoding delay and the number of transmit antennas is also computed and it is shown that many of the codes presented here are optimal in this sense as well.
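For the two-antenna complex case (the rate-1 instance of this family, known as the Alamouti code), the encoding and the linear-processing ML decoding can be sketched directly. The NumPy snippet below assumes one receive antenna, a flat fading channel that is constant over two symbol periods, and perfect channel knowledge at the receiver; it illustrates the decoupled decoding rather than the paper's general construction.

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Map two symbols onto two antennas over two time slots.
    Rows are time slots, columns are transmit antennas."""
    return np.array([[s1,            s2],
                     [-np.conj(s2),  np.conj(s1)]])

def alamouti_decode(r1, r2, h1, h2):
    """Linear-processing ML detection for one receive antenna.
    r1, r2: received samples in slots 1 and 2; h1, h2: channel gains."""
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    gain = abs(h1) ** 2 + abs(h2) ** 2
    return s1_hat / gain, s2_hat / gain

# Tiny usage example with QPSK symbols and a random flat-fading channel.
rng = np.random.default_rng(1)
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
h1, h2 = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
X = alamouti_encode(s1, s2)
r = X @ np.array([h1, h2])                 # noiseless reception for illustration
print(alamouti_decode(r[0], r[1], h1, h2))  # recovers (s1, s2) up to numerical error
```

Because each estimate comes out scaled by |h1|^2 + |h2|^2, the two symbols decouple and can be sliced independently, which is exactly the simple linear decoding the abstract refers to.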
25a7b5d2db857cd86692c45d0e5376088f51aa12
A family of role-based access control (RBAC) models, referred to here as the RBAC96 models, was recently published by the author and his colleagues. This paper gives our rationale for the major decisions in developing these models and discusses alternatives that were considered.
771b52e7c7d0a4ac8b8ee0cdeed209d1c4114480
We present a new programming model for deterministic parallel computation in a pure functional language. The model is monadic and has explicit granularity, but allows dynamic construction of dataflow networks that are scheduled at runtime, while remaining deterministic and pure. The implementation is based on monadic concurrency, which has until now only been used to simulate concurrency in functional languages, rather than to provide parallelism. We present the API with its semantics, and argue that parallel execution is deterministic. Furthermore, we present a complete work-stealing scheduler implemented as a Haskell library, and we show that it performs at least as well as the existing parallel programming models in Haskell.
29cd61d634786dd3b075eeeb06349a98ea0535c6
OBJECTIVE This study investigates associations between food insufficiency and cognitive, academic, and psychosocial outcomes for US children and teenagers ages 6 to 11 and 12 to 16 years. METHODS Data from the Third National Health and Nutrition Examination Survey (NHANES III) were analyzed. Children were classified as food-insufficient if the family respondent reported that his or her family sometimes or often did not get enough food to eat. Regression analyses were conducted to test for associations between food insufficiency and cognitive, academic, and psychosocial measures in general and then within lower-risk and higher-risk groups. Regression coefficients and odds ratios for food insufficiency are reported, adjusted for poverty status and other potential confounding factors. RESULTS After adjusting for confounding variables, 6- to 11-year-old food-insufficient children had significantly lower arithmetic scores and were more likely to have repeated a grade, have seen a psychologist, and have had difficulty getting along with other children. Food-insufficient teenagers were more likely to have seen a psychologist, have been suspended from school, and have had difficulty getting along with other children. Further analyses divided children into lower-risk and higher-risk groups. The associations between food insufficiency and children's outcomes varied by level of risk. CONCLUSIONS The results demonstrate that negative academic and psychosocial outcomes are associated with family-level food insufficiency and provide support for public health efforts to increase the food security of American families.
001ffbeb63dfa6d52e9379dae46e68aea2d9407e
7e383307edacb0bb53e57772fdc1ffa2825eba91
Many problems in real-world applications involve predicting several random variables that are statistically related. Markov random fields (MRFs) are a great mathematical tool to encode such dependencies. The goal of this paper is to combine MRFs with deep learning to estimate complex representations while taking into account the dependencies between the output random variables. Towards this goal, we propose a training algorithm that is able to learn structured models jointly with deep features that form the MRF potentials. Our approach is efficient as it blends learning and inference and makes use of GPU acceleration. We demonstrate the effectiveness of our algorithm in the tasks of predicting words from noisy images, as well as tagging of Flickr photographs. We show that joint learning of the deep features and the MRF parameters results in significant performance gains.
444b9f2fff2132251a43dc4a4f8bd213e7763634
Objective: Our objective is to describe how software engineering might benefit from an evidence-based approach and to identify the potential difficulties associated with the approach. Method: We compared the organisation and technical infrastructure supporting evidence-based medicine (EBM) with the situation in software engineering. We considered the impact that factors peculiar to software engineering (i.e. the skill factor and the lifecycle factor) would have on our ability to practice evidence-based software engineering (EBSE). Results: EBSE promises a number of benefits by encouraging integration of research results with a view to supporting the needs of many different stakeholder groups. However, we do not currently have the infrastructure needed for widespread adoption of EBSE. The skill factor means software engineering experiments are vulnerable to subject and experimenter bias. The lifecycle factor means it is difficult to determine how technologies will behave once deployed. Conclusions: Software engineering would benefit from adopting what it can of the evidence approach provided that it deals with the specific problems that arise from the nature of software engineering.
cf234668399ff2d7e5e5a54039907b0fa7cf36d3
Three-dimensional hand gesture recognition has attracted increasing research interests in computer vision, pattern recognition, and human-computer interaction. The emerging depth sensors greatly inspired various hand gesture recognition approaches and applications, which were severely limited in the 2D domain with conventional cameras. This paper presents a survey of some recent works on hand gesture recognition using 3D depth sensors. We first review the commercial depth sensors and public data sets that are widely used in this field. Then, we review the state-of-the-art research for 3D hand gesture recognition in four aspects: 1) 3D hand modeling; 2) static hand gesture recognition; 3) hand trajectory gesture recognition; and 4) continuous hand gesture recognition. While the emphasis is on 3D hand gesture recognition approaches, the related applications and typical systems are also briefly summarized for practitioners.
33da83b54410af11d0cd18fd07c74e1a99f67e84
We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be repurposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters, to enable vision researchers to conduct experiments with deep representations across a range of visual concept learning paradigms.
b6011390c08d7982bdaecb60822e72ed7c751ea4
609ab78579f2f51e4677715c32d3370899bfd3a7
7abeaf172af1129556ee8b3fcbb2139172e50bdf
93dbcdc45336f4d26575e8273b3d70f7a1a260b2
The field of information systems is premised on the centrality of information technology in everyday socio-economic life. Yet, drawing on a review of the full set of articles published in Information Systems Research (ISR) over the past ten years, we argue that the field has not deeply engaged its core subject matter—the information technology (IT) artifact. Instead, we find that IS researchers tend to give central theoretical significance to the context (within which some usually unspecified technology is seen to operate), the discrete processing capabilities of the artifact (as separable from its context or use), or the dependent variable (that which is posited to be affected or changed as technology is developed, implemented, and used). The IT artifact itself tends to disappear from view, be taken for granted, or is presumed to be unproblematic once it is built and installed. After discussing the implications of our findings, we propose a research direction for the IS field that begins to take technology as seriously as its effects, context, and capabilities. In particular, we propose that IS researchers begin to theorize specifically about IT artifacts, and then incorporate these theories explicitly into their studies. We believe that such a research direction is critical if IS research is to make a significant contribution to the understanding of a world increasingly suffused with ubiquitous, interdependent, and emergent information technologies. (Information Systems Research; Information Technology; IT Research; IT Theory; Technological Artifacts; Technology Change)
bc5e20c9e950a5dcedbe1caacc39afe097e3a6b0
Generalizability is a major concern to those who do, and use, research. Statistical, sampling-based generalizability is well known, but methodologists have long been aware of conceptions of generalizability beyond the statistical. The purpose of this essay is to clarify the concept of generalizability by critically examining its nature, illustrating its use and misuse, and presenting a framework for classifying its different forms. The framework organizes the different forms into four types, which are defined by the distinction between empirical and theoretical kinds of statements. On the one hand, the framework affirms the bounds within which statistical, sampling-based generalizability is legitimate. On the other hand, the framework indicates ways in which researchers in information systems and other fields may properly lay claim to generalizability, and thereby broader relevance, even when their inquiry falls outside the bounds of sampling-based research. (Research Methodology; Positivist Research; Interpretive Research; Quantitative Research; Qualitative Research; Case Studies; Research Design; Generalizability )
2e5f2b57f4c476dd69dc22ccdf547e48f40a994c
14b5e8ba23860f440ea83ed4770e662b2a111119
Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark (Krizhevsky et al., 2012). However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.
424561d8585ff8ebce7d5d07de8dbf7aae5e7270
State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features; using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3], our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on the PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.
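The "at each position" part is easiest to see in the anchor generation step: every cell of the shared feature map receives a fixed set of reference boxes at several scales and aspect ratios, and the RPN scores and regresses each one. A minimal NumPy sketch is below; the stride, scales, and ratios are commonly used defaults, not necessarily the exact values of any particular configuration.

```python
import numpy as np

def generate_anchors(feat_h, feat_w, stride=16,
                     scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Return (feat_h*feat_w*len(scales)*len(ratios), 4) anchors as
    (x1, y1, x2, y2) boxes in input-image coordinates."""
    base = []
    for s in scales:
        for r in ratios:
            w = s / np.sqrt(r)         # keep area ~ s*s with aspect ratio h/w = r
            h = s * np.sqrt(r)
            base.append([-w / 2, -h / 2, w / 2, h / 2])
    base = np.array(base)                                   # (A, 4) base anchors
    # Centers of every feature-map cell, mapped back to image coordinates.
    shift_x = (np.arange(feat_w) + 0.5) * stride
    shift_y = (np.arange(feat_h) + 0.5) * stride
    sx, sy = np.meshgrid(shift_x, shift_y)
    shifts = np.stack([sx.ravel(), sy.ravel(), sx.ravel(), sy.ravel()], axis=1)
    # Broadcast: every base anchor placed at every cell center.
    anchors = (shifts[:, None, :] + base[None, :, :]).reshape(-1, 4)
    return anchors

# e.g. a 38x50 feature map (stride 16, ~600x800 input) gives 38*50*9 = 17100 anchors
print(generate_anchors(38, 50).shape)
```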
cddb9e0effbc56594049c9e7d788b0df2247b1e5
7908a8d73c9164ddfa6eb3f355494dfc849dc98f
This paper briefly describes the effort toward a mathematical formulation that validates a new Dijkstra algorithm, derived from the original Dijkstra algorithm [1], as an improved Dijkstra. The result of this improved Dijkstra was used to find the shortest path for a firefighting unit to reach the exact location of a fire. The idea depends on the influence of turns on the path: between two otherwise equal paths, the one with more turns takes more time to traverse and the one with fewer turns takes less. To apply this scenario practically, we take a small real area in south Khartoum. The result gives strong justification that proves and verifies our methodology, with a clear contribution to an improved Dijkstra algorithm for firefighting, called Geo-Dijkstra. Furthermore, an evaluation of the above-mentioned algorithms has been carried out, showing very promising and realistic results.
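One straightforward way to make Dijkstra's algorithm turn-aware, in the spirit of the intuition above, is to search over (node, heading) states and charge a fixed penalty whenever the heading changes. The sketch below is an illustration under that assumption; the graph format, the penalty value, and the function names are hypothetical, not taken from the paper.

```python
import heapq
import itertools
import math

def turn_aware_dijkstra(nodes, edges, source, target, turn_penalty=5.0):
    """Shortest path where every change of direction adds `turn_penalty`
    on top of the edge lengths. nodes: {name: (x, y)}; edges: (u, v, length)."""
    adj = {}
    for u, v, w in edges:                      # build an undirected adjacency list
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))

    def heading(u, v):                         # direction of travel along edge u -> v
        (x1, y1), (x2, y2) = nodes[u], nodes[v]
        return math.atan2(y2 - y1, x2 - x1)

    tie = itertools.count()                    # tie-breaker so the heap never compares states
    pq = [(0.0, next(tie), source, None)]      # (cost, tie, node, heading used to reach node)
    best = {}
    while pq:
        cost, _, u, h_in = heapq.heappop(pq)
        if u == target:
            return cost
        if best.get((u, h_in), math.inf) <= cost:
            continue
        best[(u, h_in)] = cost
        for v, w in adj.get(u, []):
            h_out = heading(u, v)
            turn = 0.0
            if h_in is not None:
                diff = (h_out - h_in + math.pi) % (2 * math.pi) - math.pi
                if abs(diff) > 1e-6:           # any change of direction counts as a turn
                    turn = turn_penalty
            heapq.heappush(pq, (cost + w + turn, next(tie), v, h_out))
    return math.inf
```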
0fccd6c005fc60153afa8d454e056e80cca3102e
Network security technology has become crucial in protecting government and industry computing infrastructure. Modern intrusion detection applications face complex requirements; they need to be reliable, extensible, easy to manage, and have low maintenance cost. In recent years, machine learning-based intrusion detection systems have demonstrated high accuracy, good generalization to novel types of intrusion, and robust behavior in a changing environment. This work aims to compare the efficiency of machine learning methods in intrusion detection systems, including artificial neural networks and support vector machines, in the hope of providing a reference for establishing intrusion detection systems in the future. Compared with other related works on machine learning-based intrusion detectors, we propose to calculate the mean value by sampling different ratios of normal data for each measurement, which leads to a better accuracy rate for observation data in the real world. We compare the accuracy, detection rate, and false alarm rate for four attack types. The extensive experimental results on the KDD Cup intrusion detection benchmark dataset demonstrate that the proposed approach produces higher performance than the KDD winner, especially for U2R and R2L type attacks.
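The SVM side of such a comparison, including the repeated sampling of different normal-to-attack ratios and averaging of the scores described above, can be sketched with scikit-learn as follows. The feature matrices, ratios, and hyperparameters are placeholders; no claim is made that this matches the paper's exact experimental setup.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def evaluate_svm(X_normal, X_attack, normal_ratios=(0.5, 1.0, 2.0), seed=0):
    """Train/test an RBF-SVM detector several times, each time sampling a
    different amount of normal traffic relative to attacks, and report the
    mean accuracy over the runs (the averaging idea in the abstract)."""
    rng = np.random.default_rng(seed)
    scores = []
    for ratio in normal_ratios:
        n_norm = min(len(X_normal), int(ratio * len(X_attack)))
        idx = rng.choice(len(X_normal), size=n_norm, replace=False)
        X = np.vstack([X_normal[idx], X_attack])
        y = np.concatenate([np.zeros(n_norm), np.ones(len(X_attack))])
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  stratify=y, random_state=seed)
        scaler = StandardScaler().fit(X_tr)
        clf = SVC(kernel="rbf", C=1.0, gamma="scale")
        clf.fit(scaler.transform(X_tr), y_tr)
        scores.append(accuracy_score(y_te, clf.predict(scaler.transform(X_te))))
    return float(np.mean(scores))
```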
86ab4cae682fbd49c5a5bedb630e5a40fa7529f6
We present an application of back-propagation networks to handwritten digit recognition. Minimal preprocessing of the data was required, but the architecture of the network was highly constrained and specifically designed for the task. The input of the network consists of normalized images of isolated digits. The method has a 1% error rate and about a 9% reject rate on zipcode digits provided by the U.S. Postal Service.
3c5ba48d25fbe24691ed060fa8f2099cc9eba14f
Despite the progress achieved by deep learning in face recognition (FR), more and more people find that racial bias explicitly degrades the performance of realistic FR systems. Given that existing training and testing databases consist almost entirely of Caucasian subjects, there are still no independent testing databases to evaluate racial bias, and no training databases and methods to reduce it. To facilitate research towards overcoming these fairness issues, this paper contributes a new dataset called the Racial Faces in-the-Wild (RFW) database with two important uses: 1) racial bias testing: four testing subsets, namely Caucasian, Asian, Indian, and African, are constructed, each containing about 3000 individuals with 6000 image pairs for face verification; 2) racial bias reduction: one labeled training subset with Caucasians and three unlabeled training subsets with Asians, Indians, and Africans are offered to encourage FR algorithms to transfer recognition knowledge from Caucasians to other races. To the best of our knowledge, RFW is the first database for measuring racial bias in FR algorithms. After proving the existence of a domain gap among different races and of racial bias in FR algorithms, we further propose a deep information maximization adaptation network (IMAN) to bridge the domain gap, and comprehensive experiments show that the racial bias can be narrowed down by our algorithm.
0f9b608cd19afeb083e0244df4cd0db1a00e029b
We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the Kullback-Leibler divergence between the model and the empirical distribution of the training data. A greedy algorithm determines how features are incrementally added to the field, and an iterative scaling algorithm is used to estimate the optimal values of the weights. The random field models and techniques introduced in this paper differ from those common to much of the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated. Relations to other learning approaches, including decision trees and Boltzmann machines, are given. As a demonstration of the method, we describe its application to the problem of automatic word classification in natural language processing.
5a0e84b72d161ce978bba66bfb0e337b80ea1708
RFIDs are emerging as a vital component of the Internet of Things. As of 2012, billions of RFIDs have been deployed to locate equipment, track drugs, tag retail goods, etc. Current RFID systems, however, can only identify whether a tagged object is within radio range (which could be up to tens of meters), but cannot pinpoint its exact location. Past proposals for addressing this limitation rely on a line-of-sight model and hence perform poorly when faced with multipath effects or non-line-of-sight, which are typical in real-world deployments. This paper introduces the first fine-grained RFID positioning system that is robust to multipath and non-line-of-sight scenarios. Unlike past work, which considers multipath as detrimental, our design exploits multipath to accurately locate RFIDs. The intuition underlying our design is that nearby RFIDs experience a similar multipath environment (e.g., reflectors in the environment) and thus exhibit similar multipath profiles. We capture and extract these multipath profiles by using a synthetic aperture radar (SAR) created via antenna motion. We then adapt dynamic time warping (DTW) techniques to pinpoint a tag's location. We built a prototype of our design using USRP software radios. Results from a deployment of 200 commercial RFIDs in our university library demonstrate that the new design can locate misplaced books with a median accuracy of 11 cm.
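The DTW comparison step can be sketched compactly. The snippet below assumes each multipath profile has already been extracted as a 1-D array (e.g., power along the synthetic aperture) and uses absolute difference as the local cost; the profile extraction, the metric, and the reference-matching scheme are simplified assumptions, not the system's full pipeline.

```python
import numpy as np

def dtw_distance(p, q):
    """Classic O(len(p)*len(q)) dynamic time warping between two 1-D
    multipath profiles, with absolute difference as the local cost."""
    n, m = len(p), len(q)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(p[i - 1] - q[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def locate_tag(query_profile, reference_profiles):
    """Return the reference location whose profile is closest (in DTW
    distance) to the query tag's profile. reference_profiles: {loc: profile}."""
    return min(reference_profiles,
               key=lambda loc: dtw_distance(query_profile, reference_profiles[loc]))
```

Warping along the aperture axis makes the comparison tolerant of small offsets between nearby tags' profiles, which is why DTW rather than a plain Euclidean distance is used.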
ddedb83a585a53c8cd6f3277cdeaa367b526173f
79fe72080be951cf096524fd54c33402387c8e8f
The concept of cryptocurrencies is built from forgotten ideas in research literature.
691e9e6f09e8a98b6e81c9d9986605d21c56ca21
bb9419100e83f9257b1ba1658b95554783a13141
Owing to the energy shortage and increasingly serious environmental pollution, fuel cell electric vehicles (FCEV) with zero emissions and high efficiency are expected to be the most promising candidates to substitute for conventional vehicles. The DC/DC converter is the interface between the fuel cell (FC) and the driveline of the FCEV. It not only needs a high voltage gain to convert the wide FC voltage range into an appropriate voltage level, but also needs fault-tolerance capability to enhance the reliability of the system. For this reason, floating interleaved boost converters (FIBC) seem to be the optimal selection. Although this topology can continue operating without interruption under a proper control scheme in the case of a power switch open-circuit fault (OCF), operating in degraded mode has adverse impacts on the component stress and the input current ripple. Hence, this paper aims to design an effective controller that keeps the dc bus voltage constant and to provide thorough theoretical analysis and simulation verification of these undesirable effects.
ede851351f658426e77c72e7d1989dda970c995a
This paper presents an ultra-broadband ultracompact Butler Matrix design scheme. The design employs stacked transformer based couplers and lumped LC π-network phase shifters for substantial size reduction. As a proof-of-concept design, a 4×4 Butler Matrix is implemented in a standard 130nm bulk CMOS process at a center frequency of 2.0 GHz. Compared with reported fully integrated 2.0 GHz 4×4 Butler Matrix designs in CMOS, the proposed design achieves the lowest insertion loss of 1.10dB, the smallest amplitude mismatch of 0.3 dB, the largest fractional bandwidth of 34.6%, and the smallest chip core area of 0.635×1.122 mm2. Based on the measured S-parameters, the four concurrent electrical array patterns of the Butler Matrix achieve array peak-to-null ratio (PNR) of 29.5 dB at 2.0 GHz and better than 15.0 dB between 1.55 GHz and 2.50 GHz.
33b04c2ca92aac756b221e96c1d2b4b714cca409
Long-term ECG monitoring is desirable in many daily healthcare situations where a wearable device that can continuously record ECG signals is needed. In this work, we propose a wearable heart rate belt for ambulant ECG monitoring which can be comfortably worn on the chest or the waist. Active textile electrodes were designed for ECG recording, and a battery-powered circuit board was developed consisting of ECG signal conditioning circuits, a 3-axis accelerometer for body motion detection, a 12-bit AD converter, a DSP for signal processing, and an SD card for data storage. The system also includes a wireless communication module that can transmit heart rate data to a sport watch for display. Experiments were carried out which show that the proposed system is unobtrusive and can be comfortably worn by the user during daily activities. When worn on the waist, ECG signals with reasonably good quality were acquired at rest and while walking. The proposed system shows promise for long-term ambulant ECG monitoring.
845111f92b5719197a74d20dd0e050c65d4b8635
The sampling frequency and quantity of time series data collected from water distribution systems has been increasing in recent years, giving rise to the potential for improving system knowledge if suitable automated techniques can be applied, in particular, machine learning. Novelty (or anomaly) detection refers to the automatic identification of novel or abnormal patterns embedded in large amounts of ‘‘normal’’ data. When dealing with time series data (transformed into vectors), this means abnormal events embedded amongst many normal time series points. The support vector machine is a data-driven statistical technique that has been developed as a tool for classification and regression. The key features include statistical robustness with respect to non-Gaussian errors and outliers, the selection of the decision boundary in a principled way, and the introduction of nonlinearity in the feature space without explicitly requiring a nonlinear algorithm by means of kernel functions. In this research, support vector regression is used as a learning method for anomaly detection from water flow and pressure time series data. No use is made of past event histories collected through other information sources. The support vector regression methodology, whose robustness derives from the training error function, is applied
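A common way to turn support vector regression into a novelty detector for such flow or pressure series is to fit a one-step-ahead predictor on a sliding window of past samples and flag points whose prediction error is unusually large. The sketch below illustrates that idea with scikit-learn; the window length, hyperparameters, and threshold rule are illustrative assumptions rather than the configuration used in the study.

```python
import numpy as np
from sklearn.svm import SVR

def detect_anomalies(series, window=24, threshold_sigmas=3.0):
    """Fit an SVR that predicts each value from the previous `window`
    samples, then flag indices whose prediction error exceeds
    `threshold_sigmas` standard deviations of the training residuals."""
    series = np.asarray(series, dtype=float)
    X = np.array([series[i - window:i] for i in range(window, len(series))])
    y = series[window:]
    model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
    residuals = np.abs(y - model.predict(X))
    limit = residuals.mean() + threshold_sigmas * residuals.std()
    return [i + window for i, r in enumerate(residuals) if r > limit]
```

The epsilon-insensitive training loss is what gives the fit some robustness to isolated outliers, so the flagged points tend to be genuinely abnormal events rather than ordinary noise.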
5d9a3036181676e187c9c0ff995d8bed1db3557d
Domain adaptation is an important emerging topic in computer vision. In this paper, we present one of the first studies of domain shift in the context of object recognition. We introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. The transformation is learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain. While we focus our evaluation on object recognition tasks, the transform-based adaptation technique we develop is general and could be applied to non-image data. Another contribution is a new multi-domain object database, freely available for download. We experimentally demonstrate the ability of our method to improve recognition on categories with few or no target domain labels and moderate to large changes in the imaging conditions.
95ded03f3eb9d60b3e3d51931147d5049be4ba5e
Reconfigurable reflectarray antennas operating in the Ku-band are presented in this paper. First, a novel multilayer unit cell based on the polarization turning concept is proposed to achieve the single-bit phase shift required for reconfigurable reflectarray applications. The principle of the unit cell is discussed using the current model and the space match condition, along with simulations to corroborate the design and performance criteria. Then, an offset-fed configuration is developed to verify the performance of the unit cell in an antenna application, and its polarization transformation property is elaborated. Finally, an offset-fed reflectarray with 10 × 10 elements is developed and fabricated. The dual-polarized antenna utilizes control code matrices to accomplish wide-angle beam scanning. A full-wave analysis is applied to the reflectarray, and detailed results are presented and discussed. This electronically steerable reflectarray antenna has significant potential for satellite applications, due to its wide operating band, simple control, and beam-scanning capability.
318cb91c41307135781a0a01bc9e0b6a6e123b0f
We provide a large dataset containing RGB-D image sequences and the ground-truth camera trajectories with the goal to establish a benchmark for the evaluation of visual SLAM systems. Our dataset contains the color and depth images of a Microsoft Kinect sensor and the ground-truth trajectory of camera poses. The data was recorded at full frame rate (30 Hz) and sensor resolution (640x480). The ground-truth trajectory was obtained from a high-accuracy motion-capture system with eight high-speed tracking cameras (100 Hz). Further, we provide the accelerometer data from the Kinect. Finally, we propose an evaluation criterion for measuring the quality of the estimated camera trajectory of visual SLAM systems.
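A widely used trajectory-quality measure for this kind of benchmark is the absolute trajectory error: rigidly align the estimated trajectory to the ground truth and take the RMSE of the remaining position differences. The sketch below assumes the two trajectories are already time-associated and reduced to (N, 3) position arrays; it is a generic illustration of that metric, not necessarily the exact criterion proposed in the paper.

```python
import numpy as np

def absolute_trajectory_error(est, gt):
    """est, gt: (N, 3) arrays of corresponding camera positions.
    Rigidly aligns est to gt (rotation + translation, least squares)
    and returns the RMSE of the remaining position differences."""
    est_c = est - est.mean(axis=0)
    gt_c = gt - gt.mean(axis=0)
    # Optimal rotation via SVD of the cross-covariance (Kabsch/Horn alignment).
    U, _, Vt = np.linalg.svd(est_c.T @ gt_c)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
    R = (U @ S @ Vt).T
    t = gt.mean(axis=0) - R @ est.mean(axis=0)
    aligned = est @ R.T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```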
318ada827c5273a6998cfa84e57801121ce04ddc
Applying Organizational Routines in understanding organizational change. Markus Becker, Nathalie Lazaric, Richard Nelson, Sidney G. Winter.
327acefe53c09b40ae15bfac9165b5c8f812d158
In this study, the authors examined the findings and implications of the research on trust in leadership that has been conducted during the past 4 decades. First, the study provides estimates of the primary relationships between trust in leadership and key outcomes, antecedents, and correlates (k = 106). Second, the study explores how specifying the construct with alternative leadership referents (direct leaders vs. organizational leadership) and definitions (types of trust) results in systematically different relationships between trust in leadership and outcomes and antecedents. Direct leaders (e.g., supervisors) appear to be a particularly important referent of trust. Last, a theoretical framework is offered to provide parsimony to the expansive literature and to clarify the different perspectives on the construct of trust in leadership and its operation.
6b8fe767239a34e25e71e99bd8b8a64f8279d7f4
An understanding of culture is important to the study of information technologies in that culture at various levels, including national, organizational, and group, can influence the successful implementation and use of information technology. Culture also plays a role in managerial processes that may directly, or indirectly, influence IT. Culture is a challenging variable to research, in part because of the multiple divergent definitions and measures of culture. Notwithstanding, a wide body of literature has emerged that sheds light on the relationship of IT and culture. This paper sets out to provide a review of this literature in order to lend insights into our understanding of the linkages between IT and culture. We begin by conceptualizing culture and laying the groundwork for a values-based approach to the examination of IT and culture. Using this approach, we then provide a comprehensive review of the organizational and cross-cultural IT literature that conceptually links these two traditionally separate streams of research. From our analysis, we develop six themes of IT-culture research emphasizing culture's impact on IT, IT's impact on culture, and IT culture. Building upon these themes, we then develop a theory of IT, values, and conflict. Based upon the theory, we develop propositions concerning three types of cultural conflict and the results of these conflicts. Ultimately, the theory suggests that the reconciliation of these conflicts results in a reorientation of values. We conclude with the particular research challenges posed in this line of inquiry.
9f720a880fe4c99557c4bdfe0e3595ea60902055
1a8c33f9e51ba01e1cdade7029f96892c7c7087b
Prior work on computing semantic relatedness of words focused on representing their meaning in isolation, effectively disregarding inter-word affinities. We propose a large-scale data mining approach to learning word-word relatedness, where known pairs of related words impose constraints on the learning process. We learn for each word a low-dimensional representation, which strives to maximize the likelihood of a word given the contexts in which it appears. Our method, called CLEAR, is shown to significantly outperform previously published approaches. The proposed method is based on first principles, and is generic enough to exploit diverse types of text corpora, while having the flexibility to impose constraints on the derived word similarities. We also make publicly available a new labeled dataset for evaluating word relatedness algorithms, which we believe to be the largest such dataset to date.
2c90cf37144656775a7f48f70f908f72bdb58ed8
The smart world is envisioned as an era in which objects (e.g., watches, mobile phones, computers, cars, buses, and trains) can automatically and intelligently serve people in a collaborative manner. Paving the way toward this smart world, the Internet of Things (IoT) connects everything within it. Motivated by the goal of a sustainable smart world, this paper discusses technologies and issues regarding green IoT, which aims to reduce the energy consumption of IoT. First, an overview of IoT and green IoT is given. Then, the key green information and communications technologies (ICTs) enabling green IoT (e.g., green radio-frequency identification, green wireless sensor networks, green cloud computing, green machine-to-machine communication, and green data centers) are studied, and general green ICT principles are summarized. Furthermore, the latest developments in and future vision of the sensor cloud, a novel paradigm in green IoT, are reviewed. Finally, future research directions and open problems for green IoT are presented. This work is intended as an up-to-date guide for research on green IoT and the smart world.
d7260b8cf64aca3f538080369390490830a1e248
This paper describes the main features of a multilayer antenna panel for a low-cost Ka-band array antenna for on-the-move applications. The LOCOMO satcom terminal is based on a dual-polarized low-profile antenna with transmit/receive and RHCP/LHCP switching capability.
ecda3cc93064bb274eecd94d06b47945bb672ca4
We present a design for an electronic-steerable holographic antenna with polarization control composed of a radial array of Ku-band, electronically-steerable, surface-wave waveguide (SWG) artificial-impedance-surface antennas (AISA). The antenna operates by launching surface waves into each of the SWGs via a central feed network. The surface-wave impedance is electronically controlled with varactor-tuned impedance patches. The impedance is adjusted to scan the antenna in elevation, azimuth and polarization. The radial symmetry allows for 360° azimuthal steering. If constructed with previously-demonstrated SWG AISAs, it is capable of scanning in elevation from -75° to 75° with gain variation of less than 3 dB. The polarization can be switched among V-Pol, H-Pol, LHCP and RHCP at will.
31632b27bc8a31b1fbb7867656b8e3ca840376e0
As datasets grow and analytic algorithms become more complex, the typical workflow of analysts launching an analytic, waiting for it to complete, inspecting the results, and then re-launching the computation with adjusted parameters is not realistic for many real-world tasks. This paper presents an alternative workflow, progressive visual analytics, which enables an analyst to inspect partial results of an algorithm as they become available and interact with the algorithm to prioritize subspaces of interest. Progressive visual analytics depends on adapting analytical algorithms to produce meaningful partial results and enable analyst intervention without sacrificing computational speed. The paradigm also depends on adapting information visualization techniques to incorporate the constantly refining results without overwhelming analysts and provide interactions to support an analyst directing the analytic. The contributions of this paper include: a description of the progressive visual analytics paradigm; design goals for both the algorithms and visualizations in progressive visual analytics systems; an example progressive visual analytics system (Progressive Insights) for analyzing common patterns in a collection of event sequences; and an evaluation of Progressive Insights and the progressive visual analytics paradigm by clinical researchers analyzing electronic medical records.
0d57d7cba347c6b8929a04f7391a25398ded096c
We study the problem of compressing recurrent neural networks (RNNs). In particular, we focus on the compression of RNN acoustic models, which are motivated by the goal of building compact and accurate speech recognition systems which can be run efficiently on mobile devices. In this work, we present a technique for general recurrent model compression that jointly compresses both recurrent and non-recurrent inter-layer weight matrices. We find that the proposed technique allows us to reduce the size of our Long Short-Term Memory (LSTM) acoustic model to a third of its original size with negligible loss in accuracy.
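As a rough illustration of the kind of weight-matrix compression discussed above, the sketch below applies a truncated SVD to a single recurrent weight matrix and replaces it with two smaller factors; this is a generic low-rank factorization example under assumed shapes, not the authors' joint compression scheme.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Approximate W (m x n) by U_r @ V_r with U_r (m x rank) and V_r (rank x n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]        # absorb singular values into the left factor
    V_r = Vt[:rank, :]
    return U_r, V_r

# Example: compress a hypothetical 1024 x 1024 recurrent matrix to rank 128,
# reducing its parameter count from ~1.05M to ~0.26M.
W = np.random.randn(1024, 1024).astype(np.float32)
U_r, V_r = low_rank_factorize(W, rank=128)
relative_error = np.linalg.norm(W - U_r @ V_r) / np.linalg.norm(W)
```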
779cbb350c11a5b24a8a17114cff0c26fe3747e6
We present a parser for Abstract Meaning Representation (AMR). We treat English-to-AMR conversion within the framework of string-to-tree, syntax-based machine translation (SBMT). To make this work, we transform the AMR structure into a form suitable for the mechanics of SBMT and useful for modeling. We introduce an AMR-specific language model and add data and features drawn from semantic resources. Our resulting AMR parser significantly improves upon state-of-the-art results.
40f6207b722c739c04ba5a41f7b22d472aeb08ec
We introduce the first visual dataset of fast foods with a total of 4,545 still images, 606 stereo pairs, 303 360° videos for structure from motion, and 27 privacy-preserving videos of eating events of volunteers. This work was motivated by research on fast food recognition for dietary assessment. The data was collected by obtaining three instances of 101 foods from 11 popular fast food chains, and capturing images and videos in both restaurant conditions and a controlled lab setting. We benchmark the dataset using two standard approaches, color histogram and bag of SIFT features in conjunction with a discriminative classifier. Our dataset and the benchmarks are designed to stimulate research in this area and will be released freely to the research community.
54dd77bd7b904a6a69609c9f3af11b42f654ab5d
62f9c50666152cca170619bab5f2b4da17bc15e1
In this paper, we report that features obtained from a Deep Convolutional Neural Network greatly boost food recognition accuracy when integrated with conventional hand-crafted image features, namely Fisher Vectors computed on HoG and color patches. In the experiments, we achieved 72.26% top-1 accuracy and 92.00% top-5 accuracy on the 100-class food dataset UEC-FOOD100, which greatly outperforms the best classification accuracy reported so far for this dataset, 59.6%.
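To illustrate the kind of feature integration described above, here is a minimal sketch that concatenates deep-network features with hand-crafted features and trains a linear classifier on the fused vectors; the feature arrays are assumed to be precomputed, and this is an illustrative fusion scheme rather than the paper's exact pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_fused_classifier(cnn_feats, handcrafted_feats, labels):
    """Late fusion by concatenation of two precomputed feature sets.

    cnn_feats         : (N, d1) array of deep CNN features
    handcrafted_feats : (N, d2) array, e.g. Fisher Vectors on HoG/color patches
    labels            : (N,) array of food-class labels
    """
    fused = np.hstack([cnn_feats, handcrafted_feats])
    clf = LinearSVC()          # a simple linear classifier on the fused features
    clf.fit(fused, labels)
    return clf
```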
46319a2732e38172d17a3a2f0bb218729a76e4ec
In this work, a system for recognizing activities in the home setting using a set of small and simple state-change sensors is introduced. The sensors are designed to be “tape on and forget” devices that can be quickly and ubiquitously installed in home environments. The proposed sensing system presents an alternative to sensors that are sometimes perceived as invasive, such as cameras and microphones. Unlike prior work, the system has been deployed in multiple residential environments with non-researcher occupants. Preliminary results on a small dataset show that it is possible to recognize activities of interest to medical professionals such as toileting, bathing, and grooming with detection accuracies ranging from 25% to 89% depending on the evaluation criteria used.
56cf75f8e34284a9f022e9c49d330d3fc3d18862
Grammatical error correction (GEC) is the task of automatically correcting grammatical errors in written text. Earlier attempts at grammatical error correction involve rule-based and classifier approaches which are limited to correcting only some particular types of errors in a sentence. As sentences may contain multiple errors of different types, a practical error correction system should be able to detect and correct all errors. In this report, we investigate GEC as a translation task from incorrect to correct English and explore some machine translation approaches for developing end-to-end GEC systems for all error types. We apply Statistical Machine Translation (SMT) and Neural Machine Translation (NMT) approaches to GEC and show that they can correct multiple errors of different types in a sentence, in contrast to earlier methods which focus on individual errors. We also discuss some of the weaknesses of machine translation approaches. Finally, we experiment with a candidate re-ranking technique to re-rank the hypotheses generated by machine translation systems: with regression models, we predict a grammaticality score for each candidate hypothesis and re-rank the candidates according to that score.
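As an illustration of the re-ranking step described above, the following is a minimal sketch (not the authors' implementation) of scoring translation hypotheses with a pre-trained regression model and re-ranking them; the feature extractor and regressor here are hypothetical placeholders.

```python
from sklearn.linear_model import Ridge  # any regressor works; Ridge is just an example

def rerank(hypotheses, feature_fn, regressor):
    """Re-rank candidate corrections by a predicted grammaticality score.

    hypotheses : list of candidate sentences from the SMT/NMT system
    feature_fn : maps a sentence to a numeric feature vector (hypothetical)
    regressor  : a fitted regression model predicting grammaticality
    """
    features = [feature_fn(h) for h in hypotheses]
    scores = regressor.predict(features)
    # Sort candidates from highest to lowest predicted grammaticality.
    return [h for _, h in sorted(zip(scores, hypotheses), key=lambda p: -p[0])]
```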
2e60c997eef6a37a8af87659798817d3eae2aa36
Hamiltonian Monte Carlo (HMC) sampling methods provide a mechanism for defining distant proposals with high acceptance probabilities in a Metropolis-Hastings framework, enabling more efficient exploration of the state space than standard random-walk proposals. The popularity of such methods has grown significantly in recent years. However, a limitation of HMC methods is the required gradient computation for simulation of the Hamiltonian dynamical system—such computation is infeasible in problems involving a large sample size or streaming data. Instead, we must rely on a noisy gradient estimate computed from a subset of the data. In this paper, we explore the properties of such a stochastic gradient HMC approach. Surprisingly, the natural implementation of the stochastic approximation can be arbitrarily bad. To address this problem we introduce a variant that uses second-order Langevin dynamics with a friction term that counteracts the effects of the noisy gradient, maintaining the desired target distribution as the invariant distribution. Results on simulated data validate our theory. We also provide an application of our methods to a classification task using neural networks and to online Bayesian matrix factorization.
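For concreteness, the following is a minimal sketch of a stochastic-gradient HMC update with a friction term, in the spirit of the second-order Langevin dynamics described above; the step size, friction constant, and gradient-noise estimate are illustrative placeholders, and this is not a tuned or reference implementation.

```python
import numpy as np

def sghmc(grad_estimate, theta0, n_steps=1000, eta=1e-3, alpha=0.01, beta_hat=0.0):
    """Stochastic-gradient HMC with friction (a simplified sketch).

    grad_estimate : function returning a noisy gradient of the negative
                    log-posterior at theta (e.g. from a minibatch)
    theta0        : initial parameter vector
    eta           : step size; alpha : friction term; beta_hat : estimate of
                    the gradient-noise contribution (0 if unknown)
    """
    theta = np.array(theta0, dtype=float)
    v = np.zeros_like(theta)
    noise_scale = np.sqrt(2.0 * (alpha - beta_hat) * eta)
    samples = []
    for _ in range(n_steps):
        # Momentum update with noisy gradient, friction, and injected noise.
        v = v - eta * grad_estimate(theta) - alpha * v \
            + noise_scale * np.random.randn(*theta.shape)
        theta = theta + v
        samples.append(theta.copy())
    return samples
```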
d257ba76407a13bbfddef211a5e3eb00409dc7b6
The growing amount of available information and its distributed and heterogeneous nature has a major impact on the field of data mining. In this paper, we propose a framework for parallel and distributed boosting algorithms intended for efficiently integrating specialized classifiers learned over very large, distributed and possibly heterogeneous databases that cannot fit into main computer memory. Boosting is a popular technique for constructing highly accurate classifier ensembles, where the classifiers are trained serially, with the weights on the training instances adaptively set according to the performance of previous classifiers. Our parallel boosting algorithm is designed for tightly coupled shared memory systems with a small number of processors, with an objective of achieving the maximal prediction accuracy in fewer iterations than boosting on a single processor. After all processors learn classifiers in parallel at each boosting round, they are combined according to the confidence of their prediction. Our distributed boosting algorithm is proposed primarily for learning from several disjoint data sites when the data cannot be merged together, although it can also be used for parallel learning where a massive data set is partitioned into several disjoint subsets for a more efficient analysis. At each boosting round, the proposed method combines classifiers from all sites and creates a classifier ensemble on each site. The final classifier is constructed as an ensemble of all classifier ensembles built on disjoint data sets. The newly proposed methods, applied to several data sets, have shown that parallel boosting can achieve the same or even better prediction accuracy considerably faster than standard sequential boosting. Results from the experiments also indicate that distributed boosting has comparable or slightly improved classification accuracy over standard boosting, while requiring much less memory and computational time since it uses smaller data sets.
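As a rough illustration of the confidence-weighted combination described above, the sketch below averages the class-probability outputs of classifiers trained on different sites, weighting each by its own prediction confidence; this is a simplified stand-in, not the paper's exact combination rule.

```python
import numpy as np

def combine_by_confidence(prob_outputs):
    """Combine per-site classifiers' probability outputs for one example.

    prob_outputs : list of 1-D arrays, one per site classifier, each a
    probability distribution over the classes for the same example.
    """
    probs = np.vstack(prob_outputs)            # (n_classifiers, n_classes)
    confidence = probs.max(axis=1)             # each classifier's confidence
    weights = confidence / confidence.sum()    # normalize to sum to one
    combined = weights @ probs                 # confidence-weighted average
    return int(np.argmax(combined))            # predicted class index
```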
6b7f27cff688d5305c65fbd90ae18f3c6190f762
Generative Adversarial Nets (GANs) and Variational Auto-Encoders (VAEs) provide impressive image generations from Gaussian white noise, but the underlying mathematics are not well understood. We compute deep convolutional network generators by inverting a fixed embedding operator. Therefore, they do not require to be optimized with a discriminator or an encoder. The embedding is Lipschitz continuous to deformations so that generators transform linear interpolations between input white noise vectors into deformations between output images. This embedding is computed with a wavelet Scattering transform. Numerical experiments demonstrate that the resulting Scattering generators have similar properties as GANs or VAEs, without learning a discriminative network or an encoder.
44df79541fa068c54cafd50357ab78d626170365
Architectures form the backbone of complete robotic systems. The right choice of architecture can go a long way in facilitating the specification, implementation and validation of robotic systems. Conversely, of course, the wrong choice can make one's life miserable. We present some of the needs of robotic systems, describe some general classes of robot architectures, and discuss how different architectural styles can help in addressing those needs. The paper, like the field itself, is somewhat preliminary, yet it is hoped that it will provide guidance for those who use, or develop, robot architectures.
5426559cc4d668ee105ec0894b77493e91c5c4d3
ceb709d8be647b7fa089a63d0ba9d82b2eede1f4
The substrate integrated waveguide (SIW) allows the construction of some types of planar antennas which cannot be conventionally integrated on a substrate. However, owing to some technology restrictions, SIW horn antennas are normally designed to work at frequencies above 10 GHz. This paper proposes a 6.8 GHz low-profile H-plane horn antenna based on the ridged SIW, which allows substrates thinner than λ0/10. Far-field radiation patterns are reported to demonstrate the good performance of the proposed antenna. Comparisons among ridged SIW horn antennas with different numbers of ridges are also made to show the matching improvements.
a70e0ae7407d6ba9f2bf576dde69a0e109114af0
0f060ec52c0f7ea2dde6b23921a766e7b8bf4822
Scholars in the research domains of innovation and strategic management have been concerned with appropriability for roughly 30 years, and appropriability research has evolved continuously over that period. In this paper, we analyze 30 years (1986–2016) of literature on appropriability drawn from the Web of Science Core Collection database. A cited-reference clustering map for different periods and a term co-occurrence map are generated using bibliometric analysis and content analysis. On this basis, we study the evolutionary trajectory, mechanisms, and theoretical architecture of appropriability research and explore further research directions. The results indicate that the essence of the evolution of appropriability research lies in changing perceptions of openness and sharing, value creation and value growth; future research focuses on the role of appropriability in platform governance, generative appropriability, and the evolution of problem-solving mechanisms.
cd108ed4f69b754cf0a5f3eb74d6c1949ea6674d
Inverse problems in image and audio, and super-resolution in particular, can be seen as high-dimensional structured prediction problems, where the goal is to characterize the conditional distribution of a high-resolution output given its low-resolution corrupted observation. When the scaling ratio is small, point estimates achieve impressive performance, but soon they suffer from the regression-to-the-mean problem, a result of their inability to capture the multi-modality of this conditional distribution. Modeling high-dimensional image and audio distributions is a hard task, requiring both the ability to model complex geometrical structures and textured regions. In this paper, we propose to use as conditional model a Gibbs distribution, where its sufficient statistics are given by deep convolutional neural networks. The features computed by the network are stable to local deformation, and have reduced variance when the input is a stationary texture. These properties imply that the resulting sufficient statistics minimize the uncertainty of the target signals given the degraded observations, while being highly informative. The filters of the CNN are initialized by multiscale complex wavelets, and then we propose an algorithm to fine-tune them by estimating the gradient of the conditional log-likelihood, which bears some similarities with Generative Adversarial Networks. We evaluate experimentally the proposed approach in the image super-resolution task, but the approach is general and could be used in other challenging ill-posed problems such as audio bandwidth extension.
18b534c7207a1376fa92e87fe0d2cfb358d98c51
We demonstrate that an unlexicalized PCFG can parse much more accurately than previously shown, by making use of simple, linguistically motivated state splits, which break down false independence assumptions latent in a vanilla treebank grammar. Indeed, its performance of 86.36% (LP/LR F1) is better than that of early lexicalized PCFG models, and surprisingly close to the current state-of-the-art. This result has potential uses beyond establishing a strong lower bound on the maximum possible accuracy of unlexicalized models: an unlexicalized PCFG is much more compact, easier to replicate, and easier to interpret than more complex lexical models, and the parsing algorithms are simpler, more widely understood, of lower asymptotic complexity, and easier to optimize.
In the early 1990s, as probabilistic methods swept NLP, parsing work revived the investigation of probabilistic context-free grammars (PCFGs) (Booth and Thomson, 1973; Baker, 1979). However, early results on the utility of PCFGs for parse disambiguation and language modeling were somewhat disappointing. A conviction arose that lexicalized PCFGs (where head words annotate phrasal nodes) were the key tool for high-performance PCFG parsing. This approach was congruent with the great success of word n-gram models in speech recognition, and drew strength from a broader interest in lexicalized grammars, as well as demonstrations that lexical dependencies were a key tool for resolving ambiguities such as PP attachments (Ford et al., 1982; Hindle and Rooth, 1993). In the following decade, great success in terms of parse disambiguation and even language modeling was achieved by various lexicalized PCFG models (Magerman, 1995; Charniak, 1997; Collins, 1999; Charniak, 2000; Charniak, 2001). However, several results have brought into question how large a role lexicalization plays in such parsers. Johnson (1998) showed that the performance of an unlexicalized PCFG over the Penn treebank could be improved enormously simply by annotating each node by its parent category. The Penn treebank covering PCFG is a poor tool for parsing because the context-freedom assumptions it embodies are far too strong, and weakening them in this way makes the model much better. More recently, Gildea (2001) discusses how taking the bilexical probabilities out of a good current lexicalized PCFG parser hurts performance hardly at all: by at most 0.5% for test text from the same domain as the training data, and not at all for test text from a different domain.[1] But it is precisely these bilexical dependencies that backed the intuition that lexicalized PCFGs should be very successful, for example in Hindle and Rooth's demonstration from PP attachment. We take this as a reflection of the fundamental sparseness of the lexical dependency information available in the Penn Treebank. As a speech person would say, one million words of training data just isn't enough.
Even for topics central to the treebank's Wall Street Journal text, such as stocks, many very plausible dependencies occur only once, for example "stocks stabilized", while many others occur not at all, for example "stocks skyrocketed".[2] The best-performing lexicalized PCFGs have increasingly made use of subcategorization[3] of the categories appearing in the Penn treebank.
[1] There are minor differences, but all the current best-known lexicalized PCFGs employ both monolexical statistics, which describe the phrasal categories of arguments and adjuncts that appear around a head lexical item, and bilexical statistics, or dependencies, which describe the likelihood of a head word taking as a dependent a phrase headed by a certain other word.
[2] This observation motivates various class- or similarity-based approaches to combating sparseness, and this remains a promising avenue of work, but success in this area has proven somewhat elusive, and, at any rate, current lexicalized PCFGs do simply use exact word matches if available, and interpolate with syntactic category-based estimates when they are not.
[3] In this paper we use the term subcategorization in the original general sense of Chomsky (1965), for where a syntactic category is divided into several subcategories, for example dividing verb phrases into finite and non-finite verb phrases, rather than in the modern restricted usage where the term refers only to the syntactic argument frames of predicators.
The parsing algorithms have lower asymptotic complexity[4] and have much smaller grammar constants. An unlexicalized PCFG parser is much simpler to build and optimize, including both standard code optimization techniques and the investigation of methods for search space pruning (Caraballo and Charniak, 1998; Charniak et al., 1998). It is not our goal to argue against the use of lexicalized probabilities in high-performance probabilistic parsing. It has been comprehensively demonstrated that lexical dependencies are useful in resolving major classes of sentence ambiguities, and a parser should make use of such information where possible. We focus here on using unlexicalized, structural context because we feel that this information has been underexploited and underappreciated. We see this investigation as only one part of the foundation for state-of-the-art parsing which employs both lexical and structural conditioning.
[4] O(n^3) vs. O(n^5) for a naive implementation, or vs. O(n^4) if using the clever approach of Eisner and Satta (1999).
1 Experimental Setup. To facilitate comparison with previous work, we trained our models on sections 2–21 of the WSJ section of the Penn treebank. We used the first 20 files (393 sentences) of section 22 as a development set (devset). This set is small enough that there is noticeable variance in individual results, but it allowed rapid search for good features via continually reparsing the devset in a partially manual hill-climb. All of section 23 was used as a test set for the final model. For each model, input trees were annotated or transformed in some way, as in Johnson (1998). Given a set of transformed trees, we viewed the local trees as grammar rewrite rules in the standard way, and used (unsmoothed) maximum-likelihood estimates for rule probabilities.[5] To parse the grammar, we used a simple array-based Java implementation of a generalized CKY parser, which, for our final best model, was able to exhaustively parse all sentences in section 23 in 1 GB of memory, taking approximately 3 sec for average-length sentences.[6]
[5] The tagging probabilities were smoothed to accommodate unknown words. The quantity P(tag|word) was estimated as follows: words were split into one of several categories (wordclass), based on capitalization, suffix, digit, and other character features. For each of these categories, we took the maximum-likelihood estimate of P(tag|wordclass). This distribution was used as a prior against which observed taggings, if any, were taken, giving P(tag|word) = [c(tag, word) + κ P(tag|wordclass)] / [c(word) + κ]. This was then inverted to give P(word|tag). The quality of this tagging model impacts all numbers; for example, the raw treebank grammar's devset F1 is 72.62 with it and 72.09 without it.
[6] The parser is available for download as open source at: http://nlp.stanford.edu/downloads/lex-parser.shtml
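Footnote [5] above gives a concrete smoothing formula for unknown-word tagging; the following is a minimal sketch of that estimate, assuming the counts and word-class prior are already available (the function and variable names are illustrative, not from the released parser).

```python
def smoothed_tag_given_word(c_tag_word, c_word, p_tag_given_wordclass, kappa=1.0):
    """P(tag|word) = [c(tag, word) + kappa * P(tag|wordclass)] / [c(word) + kappa].

    c_tag_word            : count of (tag, word) in the training trees
    c_word                : count of word in the training trees
    p_tag_given_wordclass : ML estimate of P(tag|wordclass) for the word's class
    kappa                 : smoothing strength
    """
    return (c_tag_word + kappa * p_tag_given_wordclass) / (c_word + kappa)
```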
0aac231f1f73bfaabb89ec8b7fdd47dcb288e237
We present a novel l1-regularized off-policy convergent TD-learning method (termed RO-TD), which is able to learn sparse representations of value functions with low computational complexity. The algorithmic framework underlying RO-TD integrates two key ideas: off-policy convergent gradient TD methods, such as TDC, and a convex-concave saddle-point formulation of non-smooth convex optimization, which enables first-order solvers and feature selection using online convex regularization. A detailed theoretical and experimental analysis of RO-TD is presented. A variety of experiments are presented to illustrate the off-policy convergence, sparse feature selection capability and low computational cost of the RO-TD algorithm.
33fa11ba676f317b73f963ade7226762b4f3f9c2
08f410a5d6b2770e4630e3f90fb6f3e6b5bfc285
In this paper we present a brief overview of the current state of the art in Arabic text representation and classification methods. First, we describe some algorithms applied to the classification of Arabic text. Second, we cite the major works that compare classification algorithms applied to Arabic text. We then mention some authors who propose new classification methods, and finally we investigate the impact of preprocessing on Arabic text classification (TC).
6ef78fdb3c54a847d665d006cf812d69326e70ed
This paper focuses on the non-inverting buck-boost converter operated as either a buck or a boost converter. It is shown that a pulse-width modulation (PWM) discontinuity around the buck/boost mode transition can result in substantial increases in output voltage ripple. The effect of the PWM nonlinearity is studied using periodic steady-state analysis to quantify the worst-case ripple voltage in terms of design parameters. Furthermore, a bifurcation analysis shows that the PWM discontinuity leads to a quasi-periodic route to chaos, which results in erratic operation around the buck/boost mode transition. The increased ripple is a very significant problem when the converter is used as a power supply for an RF power amplifier, as is the case in WCDMA handsets. An approach is proposed to remove the discontinuity, which results in reduced output voltage ripple at the expense of reduced efficiency, as demonstrated on an experimental prototype.
4f22dc9084ce1b99bf171502174a992e502e32e1
c7bd6ff231f5ca6051ebfe9fac1ecf209868bff6
A new method for estimating knee joint flexion/extension angles from segment acceleration and angular velocity data is described. The approach uses a combination of Kalman filters and biomechanical constraints based on anatomical knowledge. In contrast to many recently published methods, the proposed approach does not make use of the earth's magnetic field and hence is insensitive to the complex field distortions commonly found in modern buildings. The method was validated experimentally by calculating knee angle from measurements taken from two IMUs placed on adjacent body segments. In contrast to many previous studies which have validated their approach during relatively slow activities or over short durations, the performance of the algorithm was evaluated during both walking and running over 5 minute periods. Seven healthy subjects were tested at various speeds from 1 to 5 mile/h. Errors were estimated by comparing the results against data obtained simultaneously from a 10 camera motion tracking system (Qualysis). The average measurement error ranged from 0.7 degrees for slow walking (1 mph) to 3.4 degrees for running (5 mph). The joint constraint used in the IMU analysis was derived from the Qualysis data. Limitations of the method, its clinical application and its possible extension are discussed.
d6f073762c744bff5fe7562936d3aae4c2f7b67d
In recent years, progress in hardware technology has resulted in the possibility of monitoring many events in real time. The volume of incoming data may be so large, that monitoring all individual data might be intractable. Revisiting any particular record can also be impossible in this environment. Therefore, many database schemes, such as aggregation, join, frequent pattern mining, and indexing, become more challenging in this context. This paper surveys the previous efforts to resolve these issues in processing data streams. The emphasis is on specifying and processing sliding window queries, which are supported in many stream processing engines. We also review the related work on stream query processing, including synopsis structures, plan sharing, operator scheduling, load shedding, and disorder control. Category: Ubiquitous computing
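As a small illustration of the sliding-window query processing discussed above, the sketch below maintains a count-based sliding window over a stream and reports a running average; real engines also handle time-based windows, disorder, and load shedding, which this toy example omits.

```python
from collections import deque

class SlidingWindowAverage:
    """Running average over the last `size` items of a stream (count-based window)."""

    def __init__(self, size):
        self.window = deque(maxlen=size)   # old items fall out automatically
        self.total = 0.0

    def insert(self, value):
        if len(self.window) == self.window.maxlen:
            self.total -= self.window[0]   # expire the oldest item before it is evicted
        self.window.append(value)
        self.total += value
        return self.total / len(self.window)

# Example usage over a stream of readings:
w = SlidingWindowAverage(size=3)
averages = [w.insert(x) for x in [10, 20, 30, 40]]   # [10.0, 15.0, 20.0, 30.0]
```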
cb745fd78fc7613f95bf5bed1fb125d2e7e39708
Building trustless cross-blockchain trading protocols is challenging. Therefore, centralized liquidity providers remain the preferred route to execute transfers across chains, which fundamentally contradicts the purpose of permissionless ledgers to replace trusted intermediaries. Enabling cross-blockchain trades could not only enable currently competing blockchain projects to better collaborate, but seems of particular importance to decentralized exchanges as those are currently limited to the trade of digital assets within their respective blockchain ecosystem. In this paper we systematize the notion of cryptocurrency-backed tokens, an approach towards trustless cross-chain communication. We propose XCLAIM, a protocol for issuing, trading, and redeeming e.g. Bitcoin-backed tokens on Ethereum. We provide implementations for three possible protocol versions and evaluate their security and on-chain costs. With XCLAIM, it costs at most USD 1.17 to issue an arbitrary amount of Bitcoin-backed tokens on Ethereum, given current blockchain transaction fees. Our protocol requires no modifications to Bitcoin's and Ethereum's consensus rules and is general enough to support other cryptocurrencies.
0be0d781305750b37acb35fa187febd8db67bfcc
We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that, for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation. We report on a large-scale experiment, over half a million runs of C4.5 and a Naive-Bayes algorithm, to estimate the effects of different parameters on these algorithms on real-world datasets. For cross-validation, we vary the number of folds and whether the folds are stratified or not; for bootstrap, we vary the number of bootstrap samples. Our results indicate that for real-world datasets similar to ours, the best method to use for model selection is ten-fold stratified cross-validation, even if computation power allows using more folds.
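To make the recommendation above concrete, here is a minimal sketch of ten-fold stratified cross-validation for model selection using scikit-learn; the classifier and dataset are placeholders.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)                 # placeholder dataset
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(DecisionTreeClassifier(), X, y, cv=cv)
print(scores.mean(), scores.std())                # accuracy estimate for model selection
```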
2c10a1ee5039c2f145abab6d5cc335d58f161ef0
160285998b31b11788182da282a1dc6f1e1b40f2
Microsoft Research Redmond participated for the first time in TREC this year, focusing on the question answering track. There is a separate report in this volume on the Microsoft Research Cambridge submissions for the filtering and Web tracks (Robertson et al., 2002). We have been exploring data-driven techniques for Web question answering, and modified our system somewhat for participation in TREC QA. We submitted two runs for the main QA track (AskMSR and AskMSR2).
8213dbed4db44e113af3ed17d6dad57471a0c048
0521ffc1c02c6a4898d02b4afcc7da162fc3ded3
A novel ultra-wideband (UWB) microstrip-to-CPS (coplanar stripline) transition has been developed. This transition, or balun structure, has several attractive advantages such as good impedance transformation, compact size and wide bandwidth. After the parallel-coupled line section between the microstrip line and the CPS is investigated under varied transversal dimensions, a wide transmission band is achieved with the emergence of two transmission poles. Next, such a single transition circuit is optimally designed to cover the whole UWB band (3.1 GHz to 10.6 GHz). To verify the predicted results experimentally, two back-to-back transitions with the same 50 Ω microstrip feed lines are fabricated and tested. Measured results exhibit a return loss close to 10.0 dB over a band from 3.5 GHz to 10.0 GHz.
22ee2316b96c41f743082bd9de679104d79c683a
75041575e3a9fa92af93111fb0a93565efed4858
This paper deals with the implementation of a mobile measuring station for a greenhouse environment, navigated using the potential field method. The function of a greenhouse is to create the optimal growing conditions for the full life of the plants. Using autonomous measuring systems helps to monitor all the parameters necessary for creating the optimal environment in the greenhouse. The robot, equipped with sensors, is capable of driving to the end and back along crop rows inside the greenhouse. The paper introduces a wireless sensor network that was used for measuring and controlling the greenhouse application. Continuous advancements in wireless technology and miniaturization have made the deployment of sensor networks to monitor various aspects of the environment increasingly flexible.
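As an illustration of the potential field navigation mentioned above, the sketch below combines an attractive force toward the goal with repulsive forces from nearby obstacles to produce a 2-D steering step; the gains, influence range, and point-obstacle model are simplifying assumptions, not the system described in the paper.

```python
import numpy as np

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5,
                         influence=1.0, step=0.05):
    """One descent step on a simple attractive/repulsive potential field."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)                       # attraction toward the goal
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 1e-9 < d < influence:                       # repulsion only within range
            force += k_rep * (1.0 / d - 1.0 / influence) / d**2 * (diff / d)
    norm = np.linalg.norm(force)
    # Move a fixed step along the normalized net force (stay put if force is ~zero).
    return pos if norm < 1e-9 else pos + step * force / norm
```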