query_id (string, lengths 32-32) | query (string, lengths 6-5.38k) | positive_passages (list, lengths 1-17) | negative_passages (list, lengths 9-100) | subset (string, 7 classes) |
---|---|---|---|---|
49965651e9c263cd2926842cc103186b
|
PAC-learning in the presence of evasion adversaries
|
[
{
"docid": "19bb054fb4c6398df99a84a382354d59",
"text": "Neural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms. We take the principled view of distributionally robust optimization, which guarantees performance under adversarial input perturbations. By considering a Lagrangian penalty formulation of perturbation of the underlying data distribution in a Wasserstein ball, we provide a training procedure that augments model parameter updates with worst-case perturbations of training data. For smooth losses, our procedure provably achieves moderate levels of robustness with little computational or statistical cost relative to empirical risk minimization. Furthermore, our statistical guarantees allow us to efficiently certify robustness for the population loss. We match or outperform heuristic approaches on supervised and reinforcement learning tasks.",
"title": ""
}
] |
[
{
"docid": "2b38ac7d46a1b3555fef49a4e02cac39",
"text": "We study the problem of representation learning in heterogeneous networks. Its unique challenges come from the existence of multiple types of nodes and links, which limit the feasibility of the conventional network embedding techniques. We develop two scalable representation learning models, namely metapath2vec and metapath2vec++. The metapath2vec model formalizes meta-path-based random walks to construct the heterogeneous neighborhood of a node and then leverages a heterogeneous skip-gram model to perform node embeddings. The metapath2vec++ model further enables the simultaneous modeling of structural and semantic correlations in heterogeneous networks. Extensive experiments show that metapath2vec and metapath2vec++ are able to not only outperform state-of-the-art embedding models in various heterogeneous network mining tasks, such as node classification, clustering, and similarity search, but also discern the structural and semantic correlations between diverse network objects.",
"title": ""
},
{
"docid": "538302f10d223613fd756b9b0e70b32b",
"text": "Generative Adversarial Networks (GANs) have been successfully used to synthesize realistically looking images of faces, scenery and even medical images. Unfortunately, they usually require large training datasets, which are often scarce in the medical field, and to the best of our knowledge GANs have been only applied for medical image synthesis at fairly low resolution. However, many state-of-theart machine learning models operate on high resolution data as such data carries indispensable, valuable information. In this work, we try to generate realistically looking high resolution images of skin lesions with GANs, using only a small training dataset of 2000 samples. The nature of the data allows us to do a direct comparison between the image statistics of the generated samples and the real dataset. We both quantitatively and qualitatively compare state-of-the-art GAN architectures such as DCGAN and LAPGAN against a modification of the latter for the task of image generation at a resolution of 256x256px. Our investigation shows that we can approximate the real data distribution with all of the models, but we notice major differences when visually rating sample realism, diversity and artifacts. In a set of use-case experiments on skin lesion classification, we further show that we can successfully tackle the problem of heavy class imbalance with the help of synthesized high resolution melanoma samples.",
"title": ""
},
{
"docid": "7a2525b0f2225167b57d86ab034bb992",
"text": "The goal of this project is to apply multilayer feedforward neural networks to phishing email detection and evaluate the effectiveness of this approach. We design the feature set, process the phishing dataset, and implement the neural network (NN) systems. We then use cross validation to evaluate the performance of NNs with different numbers of hidden units and activation functions. We also compare the performance of NNs with other major machine learning algorithms. From the statistical analysis, we conclude that NNs with an appropriate number of hidden units can achieve satisfactory accuracy even when the training examples are scarce. Moreover, our feature selection is effective in capturing the characteristics of phishing emails, as most machine learning algorithms can yield reasonable results with it.",
"title": ""
},
{
"docid": "930b64774bb10983540c6ccf092a36d9",
"text": "We consider the solution of discounted optimal stopping problems using linear function approximation methods. A Q-learning algorithm for such problems, proposed by Tsitsiklis and Van Roy, is based on the method of temporal differences and stochastic approximation. We propose alternative algorithms, which are based on projected value iteration ideas and least squares. We prove the convergence of some of these algorithms and discuss their properties.",
"title": ""
},
{
"docid": "505137d61a0087e054a2cf09c8addb4b",
"text": "A delay tolerant network (DTN) is a store and forward network where end-to-end connectivity is not assumed and where opportunistic links between nodes are used to transfer data. An emerging application of DTNs are rural area DTNs, which provide Internet connectivity to rural areas in developing regions using conventional transportation mediums, like buses. Potential applications of these rural area DTNs are e-governance, telemedicine and citizen journalism. Therefore, security and privacy are critical for DTNs. Traditional cryptographic techniques based on PKI-certified public keys assume continuous network access, which makes these techniques inapplicable to DTNs. We present the first anonymous communication solution for DTNs and introduce a new anonymous authentication protocol as a part of it. Furthermore, we present a security infrastructure for DTNs to provide efficient secure communication based on identity-based cryptography. We show that our solutions have better performance than existing security infrastructures for DTNs.",
"title": ""
},
{
"docid": "969c83b4880879f1137284f531c9f94a",
"text": "The extant literature on cross-national differences in approaches to corporate social responsibility (CSR) has mostly focused on developed countries. Instead, we offer two interrelated studies into corporate codes of conduct issued by developing country multinational enterprises (DMNEs). First, we analyse code adoption rates and code content through a mixed methods design. Second, we use multilevel analyses to examine country-level drivers of",
"title": ""
},
{
"docid": "c974e6b4031fde2b8e1de3ade33caef4",
"text": "A large literature has considered predictability of the mean or volatility of stock returns but little is known about whether the distribution of stock returns more generally is predictable. We explore this issue in a quantile regression framework and consider whether a range of economic state variables are helpful in predicting different quantiles of stock returns representing left tails, right tails or shoulders of the return distribution. Many variables are found to have an asymmetric effect on the return distribution, affecting lower, central and upper quantiles very differently. Out-of-sample forecasts suggest that upper quantiles of the return distribution can be predicted by means of economic state variables although the center of the return distribution is more difficult to predict. Economic gains from utilizing information in time-varying quantile forecasts are demonstrated through portfolio selection and option trading experiments. ∗We thank Torben Andersen, Tim Bollerslev, Peter Christoffersen as well as seminar participants at HEC, University of Montreal, University of Toronto, Goldman Sachs and CREATES, University of Aarhus, for helpful comments.",
"title": ""
},
{
"docid": "ad2e5f35a200facccc1e121e3e6da436",
"text": "Increasmg performance of CPUs and memorres wrll be squandered lf not matched by a sunrlm peformance ourease m II0 Whde the capactty of Smgle Large Expenstve D&T (SLED) has grown rapuily, the performance rmprovement of SLED has been modest Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic duk technology developed for personal computers, offers an attractive alternattve IO SLED, promtang onprovements of an or&r of mogm&e m pctformance, rehabdlty, power consumption, and scalalnlrty Thu paper rntroducesfivc levels of RAIDS, grvmg rheu relative costlpetfotmance, and compares RAID to an IBM 3380 and a Fupisu Super Eagle 1 Background: Rlsrng CPU and Memory Performance The users of computers are currently enJoymg unprecedented growth m the speed of computers Gordon Bell said that between 1974 and 1984. smgle chip computers improved m performance by 40% per year, about twice the rate of mmlcomputers [Bell 841 In the followmg year B111 Joy predicted an even faster growth [Joy 851 Mamframe and supercomputer manufacturers, havmg &fficulty keeping pace with the rapId growth predicted by “Joy’s Law,” cope by offermg m&processors as theu top-of-the-lme product. But a fast CPU does not a fast system make Gene Amdahl related CPU speed to mam memory s12e usmg this rule [Siewmrek 821 Each CPU mnstrucaon per second requues one byte of moan memory, If computer system costs are not to be dommated by the cost of memory, then Amdahl’s constant suggests that memory chip capacity should grow at the same rate Gordon Moore pr&cted that growth rate over 20 years fransuforslclup = 2y*-1%4 AK predzted by Moore’s Law, RAMs have quadrupled m capacity every twotMoom75110threeyeaFIyers861 Recently the rauo of megabytes of mam memory to MIPS ha9 been defti as ahha [Garcm 841. vvlth Amdahl’s constant meanmg alpha = 1 In parl because of the rapti drop of memory prices, mam memory we.9 have grownfastexthanCPUspeedsandmanymachmesare~ppedtoday~th alphas of 3 or tigha To mamtam the balance of costs m computer systems, secondary storage must match the advances m other parts of the system A key measPemuswn to copy mthout fee all or w of &IS matcnal IS granted pronded that the COP!S zzrc not made or lstnbuted for dwct commernal advantage, the ACM copyright notIce and the tltk of the pubbcatuon and IW da’, appear, and notxe IS @“en that COPYI\"K IS by pemtrs~on of the Association for Computing Machtnery To COPY otherwIse, or to repubbsh, requres B fee and/or spenfic perm~ss~o”",
"title": ""
},
{
"docid": "30e93cb20194b989b26a8689f06b8343",
"text": "We present a robust method for solving the map matching problem exploiting massive GPS trace data. Map matching is the problem of determining the path of a user on a map from a sequence of GPS positions of that user --- what we call a trajectory. Commonly obtained from GPS devices, such trajectory data is often sparse and noisy. As a result, the accuracy of map matching is limited due to ambiguities in the possible routes consistent with trajectory samples. Our approach is based on the observation that many regularity patterns exist among common trajectories of human beings or vehicles as they normally move around. Among all possible connected k-segments on the road network (i.e., consecutive edges along the network whose total length is approximately k units), a typical trajectory collection only utilizes a small fraction. This motivates our data-driven map matching method, which optimizes the projected paths of the input trajectories so that the number of the k-segments being used is minimized. We present a formulation that admits efficient computation via alternating optimization. Furthermore, we have created a benchmark for evaluating the performance of our algorithm and others alike. Experimental results demonstrate that the proposed approach is superior to state-of-art single trajectory map matching techniques. Moreover, we also show that the extracted popular k-segments can be used to process trajectories that are not present in the original trajectory set. This leads to a map matching algorithm that is as efficient as existing single trajectory map matching algorithms, but with much improved map matching accuracy.",
"title": ""
},
{
"docid": "d6c024e6272cc946fb23278d93ee32c6",
"text": "This article presents a method for developing anatomic contour maps that quantitatively display the vestibular apparatus's 3D spatial configurations. Contour maps are useful tools for determining the appropriate position of the drill and the safe depth and orientation of the installed piston in stapedotomy procedures",
"title": ""
},
{
"docid": "649797f21efa24c523361afee80419c5",
"text": "Web search engines typically provide search results without considering user interests or context. We propose a personalized search approach that can easily extend a conventional search engine on the client side. Our mapping framework automatically maps a set of known user interests onto a group of categories in the Open Directory Project (ODP) and takes advantage of manually edited data available in ODP for training text classifiers that correspond to, and therefore categorize and personalize search results according to user interests. In two sets of controlled experiments, we compare our personalized categorization system (PCAT) with a list interface system (LIST) that mimics a typical search engine and with a nonpersonalized categorization system (CAT). In both experiments, we analyze system performances on the basis of the type of task and query length. We find that PCAT is preferable to LIST for information gathering types of tasks and for searches with short queries, and PCAT outperforms CAT in both information gathering and finding types of tasks, and for searches associated with free-form queries. From the subjects' answers to a questionnaire, we find that PCAT is perceived as a system that can find relevant Web pages quicker and easier than LIST and CAT.",
"title": ""
},
{
"docid": "bee25514d15321f4f0bdcf867bb07235",
"text": "We propose a randomized block-coordinate variant of the classic Frank-Wolfe algorithm for convex optimization with block-separable constraints. Despite its lower iteration cost, we show that it achieves a similar convergence rate in duality gap as the full FrankWolfe algorithm. We also show that, when applied to the dual structural support vector machine (SVM) objective, this yields an online algorithm that has the same low iteration complexity as primal stochastic subgradient methods. However, unlike stochastic subgradient methods, the block-coordinate FrankWolfe algorithm allows us to compute the optimal step-size and yields a computable duality gap guarantee. Our experiments indicate that this simple algorithm outperforms competing structural SVM solvers.",
"title": ""
},
{
"docid": "31305b698f82e902a5829abc2f272d5f",
"text": "It is now recognized that the Consensus problem is a fundamental problem when one has to design and implement reliable asynchronous distributed systems. This chapter is on the Consensus problem. It studies Consensus in two failure models, namely, the Crash/no Recovery model and the Crash/Recovery model. The assumptions related to the detection of failures that are required to solve Consensus in a given model are particularly emphasized.",
"title": ""
},
{
"docid": "28e1c4c2622353fc87d3d8a971b9e874",
"text": "In-memory key/value store (KV-store) is a key building block for many systems like databases and large websites. Two key requirements for such systems are efficiency and availability, which demand a KV-store to continuously handle millions of requests per second. A common approach to availability is using replication, such as primary-backup (PBR), which, however, requires M+1 times memory to tolerate M failures. This renders scarce memory unable to handle useful user jobs.\n This article makes the first case of building highly available in-memory KV-store by integrating erasure coding to achieve memory efficiency, while not notably degrading performance. A main challenge is that an in-memory KV-store has much scattered metadata. A single KV put may cause excessive coding operations and parity updates due to excessive small updates to metadata. Our approach, namely Cocytus, addresses this challenge by using a hybrid scheme that leverages PBR for small-sized and scattered data (e.g., metadata and key), while only applying erasure coding to relatively large data (e.g., value). To mitigate well-known issues like lengthy recovery of erasure coding, Cocytus uses an online recovery scheme by leveraging the replicated metadata information to continuously serve KV requests. To further demonstrate the usefulness of Cocytus, we have built a transaction layer by using Cocytus as a fast and reliable storage layer to store database records and transaction logs. We have integrated the design of Cocytus to Memcached and extend it to support in-memory transactions. Evaluation using YCSB with different KV configurations shows that Cocytus incurs low overhead for latency and throughput, can tolerate node failures with fast online recovery, while saving 33% to 46% memory compared to PBR when tolerating two failures. A further evaluation using the SmallBank OLTP benchmark shows that in-memory transactions can run atop Cocytus with high throughput, low latency, and low abort rate and recover fast from consecutive failures.",
"title": ""
},
{
"docid": "d079bba6c4490bf00eb73541ebba8ace",
"text": "The literature on Design Science (or Design Research) has been mixed on the inclusion, form, and role of theory and theorising in Design Science. Some authors have explicitly excluded theory development and testing from Design Science, leaving them to the Natural and Social/Behavioural Sciences. Others propose including theory development and testing as part of Design Science. Others propose some ideas for the content of IS Design Theories, although more detailed and clear concepts would be helpful. This paper discusses the need and role for theory in Design Science. It further proposes some ideas for standards for the form and level of detail needed for theories in Design Science. Finally it develops a framework of activities for the interaction of Design Science with research in other scientific paradigms.",
"title": ""
},
{
"docid": "e73149799b88f5162ab15620903ba24b",
"text": "The present eyetracking study examined the influenc e of emotions on learning with multimedia. Based on a 2x2 experimental design, par ticipants received experimentally induced emotions (positive vs. neutral) and then le arn d with a multimedia instructional material, which was varied in its design (with vs. without anthropomorphisms) to induce positive emotions and facilitate learning. Learners who were in a positive emotional state before learning had better learning outcomes in com prehension and transfer tests and showed longer fixation durations on the text information o f the learning environment. Although anthropomorphisms in the learning environment did n ot i duce positive emotions, the eyetracking data revealed that learners’ attention was captured by this design element. Hence, learners in a positive emotional state who learned with the learning environment that included anthropomorphisms showed the highest learning outco me and longest fixation on the relevant information of the multimedia instruction. Results indicate an attention arousing effect of expressive anthropomorphisms and the relevance of e m tional states before learning.",
"title": ""
},
{
"docid": "4124c4c838d0c876f527c021a2c58358",
"text": "Early disease detection is a major challenge in agriculture field. Hence proper measures has to be taken to fight bioagressors of crops while minimizing the use of pesticides. The techniques of machine vision are extensively applied to agricultural science, and it has great perspective especially in the plant protection field,which ultimately leads to crops management. Our goal is early detection of bioagressors. The paper describes a software prototype system for pest detection on the infected images of different leaves. Images of the infected leaf are captured by digital camera and processed using image growing, image segmentation techniques to detect infected parts of the particular plants. Then the detected part is been processed for futher feature extraction which gives general idea about pests. This proposes automatic detection and calculating area of infection on leaves of a whitefly (Trialeurodes vaporariorum Westwood) at a mature stage.",
"title": ""
},
{
"docid": "58fbd637f7c044aeb0d55ba015c70f61",
"text": "This paper outlines an innovative software development that utilizes Quality of Service (QoS) and parallel technologies in Cisco Catalyst Switches to increase the analytical performance of a Network Intrusion Detection and Protection System (NIDPS) when deployed in highspeed networks. We have designed a real network to present experiments that use a Snort NIDPS. Our experiments demonstrate the weaknesses of NIDPSes, such as inability to process multiple packets and propensity to drop packets in heavy traffic and high-speed networks without analysing them. We tested Snort’s analysis performance, gauging the number of packets sent, analysed, dropped, filtered, injected, and outstanding. We suggest using QoS configuration technologies in a Cisco Catalyst 3560 Series Switch and parallel Snorts to improve NIDPS performance and to reduce the number of dropped packets. Our results show that our novel configuration improves performance.",
"title": ""
},
{
"docid": "68388b2f67030d85030d5813df2e147d",
"text": "Radio signal propagation modeling plays an important role in designing wireless communication systems. The propagation models are used to calculate the number and position of base stations and predict the radio coverage. Different models have been developed to predict radio propagation behavior for wireless communication systems in different operating environments. In this paper we shall limit our discussion to the latest achievements in radio propagation modeling related to tunnels. The main modeling approaches used for propagation in tunnels are reviewed, namely, numerical methods for solving Maxwell equations, waveguide or modal approach, ray tracing based methods and two-slope path loss modeling. They are discussed in terms of modeling complexity and required information on the environment including tunnel geometry and electric as well as magnetic properties of walls.",
"title": ""
},
{
"docid": "027a5da45d41ce5df40f6b342a9e4485",
"text": "GPipe is a scalable pipeline parallelism library that enables learning of giant deep neural networks. It partitions network layers across accelerators and pipelines execution to achieve high hardware utilization. It leverages recomputation to minimize activation memory usage. For example, using partitions over 8 accelerators, it is able to train networks that are 25× larger, demonstrating its scalability. It also guarantees that the computed gradients remain consistent regardless of the number of partitions. It achieves an almost linear speedup without any changes in the model parameters: when using 4× more accelerators, training the same model is up to 3.5× faster. We train a 557 million parameters AmoebaNet model and achieve a new state-ofthe-art 84.3% top-1 / 97.0% top-5 accuracy on ImageNet 2012 dataset. Finally, we use this learned model to finetune multiple popular image classification datasets and obtain competitive results, including pushing the CIFAR-10 accuracy to 99% and CIFAR-100 accuracy to 91.3%.",
"title": ""
}
] |
scidocsrr
|
14ce8d3f45975148c11e1ea05d01b5c8
|
Learning Policies to Forecast Agent Behavior with Visual Data
|
[
{
"docid": "e49f04ff71d0718eff9a3a6005b2a689",
"text": "Energy-Based Models (EBMs) capture dependencies between v ariables by associating a scalar energy to each configuration of the variab les. Inference consists in clamping the value of observed variables and finding config urations of the remaining variables that minimize the energy. Learning consi sts in finding an energy function in which observed configurations of the variables a re given lower energies than unobserved ones. The EBM approach provides a common the re ical framework for many learning models, including traditional discr minative and generative approaches, as well as graph-transformer networks, co nditi nal random fields, maximum margin Markov networks, and several manifold learn ing methods. Probabilistic models must be properly normalized, which so metimes requires evaluating intractable integrals over the space of all poss ible variable configurations. Since EBMs have no requirement for proper normalizat ion, his problem is naturally circumvented. EBMs can be viewed as a form of non-p robabilistic factor graphs, and they provide considerably more flexibility in th e design of architectures and training criteria than probabilistic approaches .",
"title": ""
}
] |
[
{
"docid": "5c2297cf5892ebf9864850dc1afe9cbf",
"text": "In this paper, we propose a novel technique for generating images in the 3D domain from images with high degree of geometrical transformations. By coalescing two popular concurrent methods that have seen rapid ascension to the machine learning zeitgeist in recent years: GANs (Goodfellow et. al.) and Capsule networks (Sabour, Hinton et. al.) we present: CapsGAN. We show that CapsGAN performs better than or equal to traditional CNN based GANs in generating images with high geometric transformations using rotated MNIST. In the process, we also show the efficacy of using capsules architecture in the GANs domain. Furthermore, we tackle the Gordian Knot in training GANs the performance control and training stability by experimenting with using Wasserstein distance (gradient clipping, penalty) and Spectral Normalization. The experimental findings of this paper should propel the application of capsules and GANs in the still exciting and nascent domain of 3D image generation, and plausibly video (frame) generation.",
"title": ""
},
{
"docid": "dd11a04de8288feba2b339cca80de41c",
"text": "A methodology for the automatic design optimization of analog circuits is presented. A non-fixed topology approach is followed. A symbolic simulator, called ISAAC, generates an analytic AC model for any analog circuit, time-continuous or time-discrete, CMOS or bipolar. ISAAC's expressions can be fully symbolic or mixed numeric-symbolic, exact or simplified. The model is passed to the design optimization program OPTIMAN. For a user selected circuit topology, the independent design variables are automatically extracted and OPTIMAN sizes all elements to satisfy the performance constraints, thereby optimizing a user defined design objective. The optimization algorithm is simulated annealing. Practical examples show that OPTIMAN quickly designs analog circuits, closely meeting the specifications, and that it is a flexible and reliable design and exploration tool.",
"title": ""
},
{
"docid": "a2df6d7e35323f02026b180270dcf205",
"text": "In an early study, a thermal model has been developed, using finite element simulations, to study the temperature field and response in the electron beam additive manufacturing (EBAM) process, with an ability to simulate single pass scanning only. In this study, an investigation was focused on the initial thermal conditions, redesigned to analyze a critical substrate thickness, above which the preheating temperature penetration will not be affected. Extended studies are also conducted on more complex process configurations, such as multi-layer raster scanning, which are close to actual operations, for more accurate representations of the transient thermal phenomenon.",
"title": ""
},
{
"docid": "e2427ff836c8b83a75d8f7074656a025",
"text": "With the rapid growth of smartphone and tablet users, Device-to-Device (D2D) communications have become an attractive solution for enhancing the performance of traditional cellular networks. However, relevant security issues involved in D2D communications have not been addressed yet. In this paper, we investigate the security requirements and challenges for D2D communications, and present a secure and efficient key agreement protocol, which enables two mobile devices to establish a shared secret key for D2D communications without prior knowledge. Our approach is based on the Diffie-Hellman key agreement protocol and commitment schemes. Compared to previous work, our proposed protocol introduces less communication and computation overhead. We present the design details and security analysis of the proposed protocol. We also integrate our proposed protocol into the existing Wi-Fi Direct protocol, and implement it using Android smartphones.",
"title": ""
},
{
"docid": "e591165d8e141970b8263007b076dee1",
"text": "Treating a human mind like a machine is an essential component of dehumanization, whereas attributing a humanlike mind to a machine is an essential component of anthropomorphism. Here we tested how a cue closely connected to a person's actual mental experience-a humanlike voice-affects the likelihood of mistaking a person for a machine, or a machine for a person. We predicted that paralinguistic cues in speech are particularly likely to convey the presence of a humanlike mind, such that removing voice from communication (leaving only text) would increase the likelihood of mistaking the text's creator for a machine. Conversely, adding voice to a computer-generated script (resulting in speech) would increase the likelihood of mistaking the text's creator for a human. Four experiments confirmed these hypotheses, demonstrating that people are more likely to infer a human (vs. computer) creator when they hear a voice expressing thoughts than when they read the same thoughts in text. Adding human visual cues to text (i.e., seeing a person perform a script in a subtitled video clip), did not increase the likelihood of inferring a human creator compared with only reading text, suggesting that defining features of personhood may be conveyed more clearly in speech (Experiments 1 and 2). Removing the naturalistic paralinguistic cues that convey humanlike capacity for thinking and feeling, such as varied pace and intonation, eliminates the humanizing effect of speech (Experiment 4). We discuss implications for dehumanizing others through text-based media, and for anthropomorphizing machines through speech-based media. (PsycINFO Database Record",
"title": ""
},
{
"docid": "77d616dc746e74db02215dcf2fdb6141",
"text": "It is almost a quarter of a century since the launch in 1968 of NASA's Pioneer 9 spacecraft on the first mission into deep-space that relied on coding to enhance communications on the critical downlink channel. [The channel code used was a binary convolutional code that was decoded with sequential decoding--we will have much to say about this code in the sequel.] The success of this channel coding system had repercussions that extended far beyond NASA's space program. It is no exaggeration to say that the Pioneer 9 mission provided communications engineers with the first incontrovertible demonstration of the practical utility of channel coding techniques and thereby paved the way for the successful application of coding to many other channels.",
"title": ""
},
{
"docid": "936d92f1afcab16a9dfe24b73d5f986d",
"text": "Active vision techniques use programmable light sources, such as projectors, whose intensities can be controlled over space and time. We present a broad framework for fast active vision using Digital Light Processing (DLP) projectors. The digital micromirror array (DMD) in a DLP projector is capable of switching mirrors “on” and “off” at high speeds (10/s). An off-the-shelf DLP projector, however, effectively operates at much lower rates (30-60Hz) by emitting smaller intensities that are integrated over time by a sensor (eye or camera) to produce the desired brightness value. Our key idea is to exploit this “temporal dithering” of illumination, as observed by a high-speed camera. The dithering encodes each brightness value uniquely and may be used in conjunction with virtually any active vision technique. We apply our approach to five well-known problems: (a) structured light-based range finding, (b) photometric stereo, (c) illumination de-multiplexing, (d) high frequency preserving motion-blur and (e) separation of direct and global scene components, achieving significant speedups in performance. In all our methods, the projector receives a single image as input whereas the camera acquires a sequence of frames.",
"title": ""
},
{
"docid": "873a24a210aa57fc22895500530df2ba",
"text": "We describe the winning entry to the Amazon Picking Challenge. From the experience of building this system and competing in the Amazon Picking Challenge, we derive several conclusions: 1) We suggest to characterize robotic system building along four key aspects, each of them spanning a spectrum of solutions—modularity vs. integration, generality vs. assumptions, computation vs. embodiment, and planning vs. feedback. 2) To understand which region of each spectrum most adequately addresses which robotic problem, we must explore the full spectrum of possible approaches. To achieve this, our community should agree on key aspects that characterize the solution space of robotic systems. 3) For manipulation problems in unstructured environments, certain regions of each spectrum match the problem most adequately, and should be exploited further. This is supported by the fact that our solution deviated from the majority of the other challenge entries along each of the spectra.",
"title": ""
},
{
"docid": "34baa9b0e77f6ef290ab54889edf293d",
"text": "Concurrent with the enactment of metric conversion legislation by the U. S. Congress in 1975, the Motor and Generator Section of the National Electrical Manufacturer Association (NEMA) voted to proceed with the development of a Guide for the Development of Metric Standards for Motors and Generators, referred to as the \" IMetric Guide\" or \"the Guide.\" The first edition was published in 1978, followed by a second, more extensive, edition in November 1980. A summary of the Metric Guide, is given, including comparison with NEMA and International Electrotechnical Commission (IEC) standards.",
"title": ""
},
{
"docid": "d6e9c09af35c5c661870d456a1dfddb5",
"text": "We present NMT-Keras, a flexible toolkit for training deep learning models, which puts a particular emphasis on thedevelopment of advanced applications of neuralmachine translation systems, such as interactive-predictive translation protocols and long-term adaptation of the translation system via continuous learning. NMT-Keras is based on an extended version of the popular Keras library, and it runs on Theano and Tensorflow. State-of-the-art neural machine translation models are deployed and used following the high-level framework provided by Keras. Given its high modularity and flexibility, it also has been extended to tackle different problems, such as image and video captioning, sentence classification and visual question answering.",
"title": ""
},
{
"docid": "51a859f71bd2ec82188826af18204f02",
"text": "This study examines the accuracy of 54 online dating photographs posted by heterosexual daters. We report data on (a1) online daters’ self-reported accuracy, (b) independent judges’ perceptions of accuracy, and (c) inconsistencies in the profile photograph identified by trained coders. While online daters rated their photos as relatively accurate, independent judges rated approximately 1/3 of the photographs as not accurate. Female photographs were judged as less accurate than male photographs, and were more likely to be older, to be retouched or taken by a professional photographer, and to contain inconsistencies, including changes in hair style and skin quality. The findings are discussed in terms of the tensions experienced by online daters to (a) enhance their physical attractiveness and (b) present a photograph that would not be judged deceptive in subsequent face-to-face meetings. The paper extends the theoretical concept of selective self-presentation to online photographs, and discusses issues of self-deception and social desirability bias.",
"title": ""
},
{
"docid": "46fa91ce587d094441466a7cbe5c5f07",
"text": "Automatic facial expression analysis is an interesting and challenging problem which impacts important applications in many areas such as human-computer interaction and data-driven animation. Deriving effective facial representative features from face images is a vital step towards successful expression recognition. In this paper, we evaluate facial representation based on statistical local features called Local Binary Patterns (LBP) for facial expression recognition. Simulation results illustrate that LBP features are effective and efficient for facial expression recognition. A real-time implementation of the proposed approach is also demonstrated which can recognize expressions accurately at the rate of 4.8 frames per second.",
"title": ""
},
{
"docid": "2c4db5a69fd0d23cfccd927b87ecc795",
"text": "Current paper examines the management accounting practices of Estonian manufacturing companies, exploring the main impacts on them within a contingency theory framework. The methodology comprises an analysis of 62 responses to a postal questionnaire survey carried out among the largest Estonian manufacturing companies. On the one hand, the present research aims to confirm earlier findings related to the ‘contingent factors’ that influence management accounting, on the other, to identify possible new factors, such as, the legal accounting environment and shortage of properly qualified accountants. 1 University of Tartu, Faculty of Economics and Business Administration, Ass. Prof. of Accounting Department, PhD, E-mail: [email protected] 2 University of Tartu, Faculty of Economics and Business Administration, Lecturer of Accounting Department, PhD student, E-mail: [email protected] Acknowledgements: The authors are grateful to prof. Robert Chenhall from Monash University for his assistance and to visiting prof. Gary Cunningham from Stuttgart University of Technology for his constructive comments. The financial support from the Estonian Science Foundation is herein acknowledged with gratitude.",
"title": ""
},
{
"docid": "d815e254478a9503f1063b5595f48e0f",
"text": "•We present an approach to this unpaired image captioning problem by language pivoting. •Our method can effectively capture the characteristics of an image captioner from the pivot language (Chinese) and align it to the target language (English) using another pivot-target (Chinese-English) parallel corpus. •Quantitative comparisons against several baseline approaches demonstrate the effectiveness of our method.",
"title": ""
},
{
"docid": "4455233571d9c4fca8cfa2a5eb8ef22f",
"text": "This article summarizes the studies of the mechanism of electroacupuncture (EA) in the regulation of the abnormal function of hypothalamic-pituitary-ovarian axis (HPOA) in our laboratory. Clinical observation showed that EA with the effective acupoints could cure some anovulatory patients in a highly effective rate and the experimental results suggested that EA might regulate the dysfunction of HPOA in several ways, which means EA could influence some gene expression of brain, thereby, normalizing secretion of some hormones, such as GnRH, LH and E2. The effects of EA might possess a relative specificity on acupoints.",
"title": ""
},
{
"docid": "bfb5ab3f17045856db6da616f5d82609",
"text": "This study examined cognitive distortions and coping styles as potential mediators for the effects of mindfulness meditation on anxiety, negative affect, positive affect, and hope in college students. Our pre- and postintervention design had four conditions: control, brief meditation focused on attention, brief meditation focused on loving kindness, and longer meditation combining both attentional and loving kindness aspects of mindfulness. Each group met weekly over the course of a semester. Longer combined meditation significantly reduced anxiety and negative affect and increased hope. Changes in cognitive distortions mediated intervention effects for anxiety, negative affect, and hope. Further research is needed to determine differential effects of types of meditation.",
"title": ""
},
{
"docid": "b101ab8f2242e85ccd7948b0b3ffe9b4",
"text": "This paper describes a language-independent model for multi-class sentiment analysis using a simple neural network architecture of five layers (Embedding, Conv1D, GlobalMaxPooling and two Fully-Connected). The advantage of the proposed model is that it does not rely on language-specific features such as ontologies, dictionaries, or morphological or syntactic pre-processing. Equally important, our system does not use pre-trained word2vec embeddings which can be costly to obtain and train for some languages. In this research, we also demonstrate that oversampling can be an effective approach for correcting class imbalance in the data. We evaluate our methods on three publicly available datasets for English, German and Arabic, and the results show that our system’s performance is comparable to, or even better than, the state of the art for these datasets. We make our source-code publicly available.",
"title": ""
},
{
"docid": "13153476fac37dd879c34907f7db5317",
"text": "Lean deveLopment is a product development paradigm with an endto-end focus on creating value for the customer, eliminating waste, optimizing value streams, empowering people, and continuously improving (see Figure 11). Lean thinking has penetrated many industries. It was first used in manufacturing, with clear goals to empower teams, reduce waste, optimize work streams, and above all keep market and customer needs as the primary decision driver.2 This IEEE Software special issue addresses lean software development as opposed to management or manufacturing theories. In that context, we sought to address some key questions: What design principles deliver value, and how are they introduced to best manage change?",
"title": ""
},
{
"docid": "17beea6923e7376369691f18b0ca63e2",
"text": "This paper investigates the effect of avatar realism on embodiment and social interactions in Virtual Reality (VR). We compared abstract avatar representations based on a wooden mannequin with high fidelity avatars generated from photogrammetry 3D scan methods. Both avatar representations were alternately applied to participating users and to the virtual counterpart in dyadic social encounters to examine the impact of avatar realism on self-embodiment and social interaction quality. Users were immersed in a virtual room via a head mounted display (HMD). Their full-body movements were tracked and mapped to respective movements of their avatars. Embodiment was induced by presenting the users' avatars to themselves in a virtual mirror. Afterwards they had to react to a non-verbal behavior of a virtual interaction partner they encountered in the virtual space. Several measures were taken to analyze the effect of the appearance of the users' avatars as well as the effect of the appearance of the others' avatars on the users. The realistic avatars were rated significantly more human-like when used as avatars for the others and evoked a stronger acceptance in terms of virtual body ownership (VBO). There also was some indication of a potential uncanny valley. Additionally, there was an indication that the appearance of the others' avatars impacts the self-perception of the users.",
"title": ""
},
{
"docid": "372ce38b93c2b3234281e2806aa3bc76",
"text": "Sorting a list of input numbers is one of the most fundamental problems in the field of computer science in general and high-throughput database applications in particular. Although literature abounds with various flavors of sorting algorithms, different architectures call for customized implementations to achieve faster sorting times. This paper presents an efficient implementation and detailed analysis of MergeSort on current CPU architectures. Our SIMD implementation with 128-bit SSE is 3.3X faster than the scalar version. In addition, our algorithm performs an efficient multiway merge, and is not constrained by the memory bandwidth. Our multi-threaded, SIMD implementation sorts 64 million floating point numbers in less than 0.5 seconds on a commodity 4-core Intel processor. This measured performance compares favorably with all previously published results. Additionally, the paper demonstrates performance scalability of the proposed sorting algorithm with respect to certain salient architectural features of modern chip multiprocessor (CMP) architectures, including SIMD width and core-count. Based on our analytical models of various architectural configurations, we see excellent scalability of our implementation with SIMD width scaling up to 16X wider than current SSE width of 128-bits, and CMP core-count scaling well beyond 32 cores. Cycle-accurate simulation of Intel’s upcoming x86 many-core Larrabee architecture confirms scalability of our proposed algorithm.",
"title": ""
}
] |
scidocsrr
|
636395e5beeb9a5c851eb65d9630c1ae
|
Preventing Private Information Inference Attacks on Social Networks
|
[
{
"docid": "1aa01ca2f1b7f5ea8ed783219fe83091",
"text": "This paper presents NetKit, a modular toolkit for classifica tion in networked data, and a case-study of its application to a collection of networked data sets use d in prior machine learning research. Networked data are relational data where entities are inter connected, and this paper considers the common case where entities whose labels are to be estimated a re linked to entities for which the label is known. NetKit is based on a three-component framewo rk, comprising a local classifier, a relational classifier, and a collective inference procedur . Various existing relational learning algorithms can be instantiated with appropriate choices for the se three components and new relational learning algorithms can be composed by new combinations of c omponents. The case study demonstrates how the toolkit facilitates comparison of differen t learning methods (which so far has been lacking in machine learning research). It also shows how the modular framework allows analysis of subcomponents, to assess which, whether, and when partic ul components contribute to superior performance. The case study focuses on the simple but im portant special case of univariate network classification, for which the only information avai lable is the structure of class linkage in the network (i.e., only links and some class labels are avail ble). To our knowledge, no work previously has evaluated systematically the power of class-li nkage alone for classification in machine learning benchmark data sets. The results demonstrate clea rly th t simple network-classification models perform remarkably well—well enough that they shoul d be used regularly as baseline classifiers for studies of relational learning for networked dat a. The results also show that there are a small number of component combinations that excel, and that different components are preferable in different situations, for example when few versus many la be s are known.",
"title": ""
}
] |
[
{
"docid": "91c9dcfd3428fb79afd8d99722c95b69",
"text": "In this article we describe results of our research on the disambiguation of user queries using ontologies for categorization. We present an approach to cluster search results by using classes or “Sense Folders” ~prototype categories! derived from the concepts of an assigned ontology, in our case WordNet. Using the semantic relations provided from such a resource, we can assign categories to prior, not annotated documents. The disambiguation of query terms in documents with respect to a user-specific ontology is an important issue in order to improve the retrieval performance for the user. Furthermore, we show that a clustering process can enhance the semantic classification of documents, and we discuss how this clustering process can be further enhanced using only the most descriptive classes of the ontology. © 2006 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "0084d9c69d79a971e7139ab9720dd846",
"text": "ÐRetrieving images from large and varied collections using image content as a key is a challenging and important problem. We present a new image representation that provides a transformation from the raw pixel data to a small set of image regions that are coherent in color and texture. This aBlobworldo representation is created by clustering pixels in a joint color-texture-position feature space. The segmentation algorithm is fully automatic and has been run on a collection of 10,000 natural images. We describe a system that uses the Blobworld representation to retrieve images from this collection. An important aspect of the system is that the user is allowed to view the internal representation of the submitted image and the query results. Similar systems do not offer the user this view into the workings of the system; consequently, query results from these systems can be inexplicable, despite the availability of knobs for adjusting the similarity metrics. By finding image regions that roughly correspond to objects, we allow querying at the level of objects rather than global image properties. We present results indicating that querying for images using Blobworld produces higher precision than does querying using color and texture histograms of the entire image in cases where the image contains distinctive objects. Index TermsÐSegmentation and grouping, image retrieval, image querying, clustering, Expectation-Maximization.",
"title": ""
},
{
"docid": "88602ba9bcb297af04e58ed478664ee5",
"text": "Effective and accurate diagnosis of Alzheimer's disease (AD), as well as its prodromal stage (i.e., mild cognitive impairment (MCI)), has attracted more and more attention recently. So far, multiple biomarkers have been shown to be sensitive to the diagnosis of AD and MCI, i.e., structural MR imaging (MRI) for brain atrophy measurement, functional imaging (e.g., FDG-PET) for hypometabolism quantification, and cerebrospinal fluid (CSF) for quantification of specific proteins. However, most existing research focuses on only a single modality of biomarkers for diagnosis of AD and MCI, although recent studies have shown that different biomarkers may provide complementary information for the diagnosis of AD and MCI. In this paper, we propose to combine three modalities of biomarkers, i.e., MRI, FDG-PET, and CSF biomarkers, to discriminate between AD (or MCI) and healthy controls, using a kernel combination method. Specifically, ADNI baseline MRI, FDG-PET, and CSF data from 51AD patients, 99 MCI patients (including 43 MCI converters who had converted to AD within 18 months and 56 MCI non-converters who had not converted to AD within 18 months), and 52 healthy controls are used for development and validation of our proposed multimodal classification method. In particular, for each MR or FDG-PET image, 93 volumetric features are extracted from the 93 regions of interest (ROIs), automatically labeled by an atlas warping algorithm. For CSF biomarkers, their original values are directly used as features. Then, a linear support vector machine (SVM) is adopted to evaluate the classification accuracy, using a 10-fold cross-validation. As a result, for classifying AD from healthy controls, we achieve a classification accuracy of 93.2% (with a sensitivity of 93% and a specificity of 93.3%) when combining all three modalities of biomarkers, and only 86.5% when using even the best individual modality of biomarkers. Similarly, for classifying MCI from healthy controls, we achieve a classification accuracy of 76.4% (with a sensitivity of 81.8% and a specificity of 66%) for our combined method, and only 72% even using the best individual modality of biomarkers. Further analysis on MCI sensitivity of our combined method indicates that 91.5% of MCI converters and 73.4% of MCI non-converters are correctly classified. Moreover, we also evaluate the classification performance when employing a feature selection method to select the most discriminative MR and FDG-PET features. Again, our combined method shows considerably better performance, compared to the case of using an individual modality of biomarkers.",
"title": ""
},
{
"docid": "a01a1bb4c5f6fc027384aa40e495eced",
"text": "Sentiment classification of grammatical constituents can be explained in a quasicompositional way. The classification of a complex constituent is derived via the classification of its component constituents and operations on these that resemble the usual methods of compositional semantic analysis. This claim is illustrated with a description of sentiment propagation, polarity reversal, and polarity conflict resolution within various linguistic constituent types at various grammatical levels. We propose a theoretical composition model, evaluate a lexical dependency parsing post-process implementation, and estimate its impact on general NLP pipelines.",
"title": ""
},
{
"docid": "5cc4b9d01928678d9099548fc31abc94",
"text": "Educational process mining (EPM) is an emerging field in educational data mining (EDM) aiming to make unexpressed knowledge explicit and to facilitate better understanding of the educational process. EPM uses log data gathered specifically from educational environments in order to discover, analyze, and provide a visual representation of the complete educational process. This paper introduces EPM and elaborates on some of the potential of this technology in the educational domain. It also describes some other relevant, related areas such as intentional mining, sequential pattern mining and graph mining. It highlights the components of an EPM framework and it describes the different challenges when handling event logs and other generic issues. It describes the data, tools, techniques and models used in EPM. In addition, the main work in this area is described and grouped by educational application domains. © 2017 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "e8792ced13f1be61d031e2b150cc5cf6",
"text": "Scientific literature cites a wide range of values for caffeine content in food products. The authors suggest the following standard values for the United States: coffee (5 oz) 85 mg for ground roasted coffee, 60 mg for instant and 3 mg for decaffeinated; tea (5 oz): 30 mg for leaf/bag and 20 mg for instant; colas: 18 mg/6 oz serving; cocoa/hot chocolate: 4 mg/5 oz; chocolate milk: 4 mg/6 oz; chocolate candy: 1.5-6.0 mg/oz. Some products from the United Kingdom and Denmark have higher caffeine content. Caffeine consumption survey data are limited. Based on product usage and available consumption data, the authors suggest a mean daily caffeine intake for US consumers of 4 mg/kg. Among children younger than 18 years of age who are consumers of caffeine-containing foods, the mean daily caffeine intake is about 1 mg/kg. Both adults and children in Denmark and UK have higher levels of caffeine intake.",
"title": ""
},
{
"docid": "c5fbbdc6da326b08c734ac1f5daf76d1",
"text": "Sentiment classification in Chinese microblogs is more challenging than that of Twitter for numerous reasons. In this paper, two kinds of approaches are proposed to classify opinionated Chinesemicroblog posts: 1) lexicon-based approaches combining Simple Sentiment Word-Count Method with 3 Chinese sentiment lexicons, 2) machine learning models with multiple features. According to our experiment, lexicon-based approaches can yield relatively fine results and machine learning classifiers outperform both the majority baseline and lexicon-based approaches. Among all the machine learning-based approaches, Random Forests works best and the results are satisfactory.",
"title": ""
},
{
"docid": "6d2449941d27774451edde784d3521fe",
"text": "Convolutional neural networks (CNNs) have recently been applied to the optical flow estimation problem. As training the CNNs requires sufficiently large amounts of labeled data, existing approaches resort to synthetic, unrealistic datasets. On the other hand, unsupervised methods are capable of leveraging real-world videos for training where the ground truth flow fields are not available. These methods, however, rely on the fundamental assumptions of brightness constancy and spatial smoothness priors that do not hold near motion boundaries. In this paper, we propose to exploit unlabeled videos for semi-supervised learning of optical flow with a Generative Adversarial Network. Our key insight is that the adversarial loss can capture the structural patterns of flow warp errors without making explicit assumptions. Extensive experiments on benchmark datasets demonstrate that the proposed semi-supervised algorithm performs favorably against purely supervised and baseline semi-supervised learning schemes.",
"title": ""
},
{
"docid": "6c9f3107fbf14f5bef1b8edae1b9d059",
"text": "Syntax definitions are pervasive in modern software systems, and serve as the basis for language processing tools like parsers and compilers. Mainstream parser generators pose restrictions on syntax definitions that follow from their implementation algorithm. They hamper evolution, maintainability, and compositionality of syntax definitions. The pureness and declarativity of syntax definitions is lost. We analyze how these problems arise for different aspects of syntax definitions, discuss their consequences for language engineers, and show how the pure and declarative nature of syntax definitions can be regained.",
"title": ""
},
{
"docid": "5295cd5811b6f86e3dbe6154d9ae5659",
"text": "While swarm robotics systems are often claimed to be highly faulttolerant, so far research has limited its attention to safe laboratory settings and has virtually ignored security issues in the presence of Byzantine robotsÐi.e., robots with arbitrarily faulty or malicious behavior. However, in many applications one or more Byzantine robots may suffice to let current swarm coordination mechanisms fail with unpredictable or disastrous outcomes. In this paper, we provide a proof-of-concept for managing security issues in swarm robotics systems via blockchain technology. Our approach uses decentralized programs executed via blockchain technology (blockchain-based smart contracts) to establish secure swarm coordination mechanisms and to identify and exclude Byzantine swarm members. We studied the performance of our blockchain-based approach in a collective decision-making scenario both in the presence and absence of Byzantine robots and compared our results to those obtained with an existing collective decision approach. The results show a clear advantage of the blockchain approach when Byzantine robots are part of the swarm.",
"title": ""
},
{
"docid": "610f1288ffa85573f0c161d65ca5f9d9",
"text": "User authentication depends largely on the concept of passwords. However, users find it difficult to remember alphanumerical passwords over time. When user is required to choose a secure password, they tend to choose an easy, short and insecure password. Graphical password method is proposed as an alternative solution to text-based alphanumerical passwords. The reason of such proposal is that human brain is better in recognizing and memorizing pictures compared to traditional alphanumerical string. Therefore, in this paper, we propose a conceptual framework to better understand the user performance for new high-end graphical password method. Our proposed framework is based on hybrid approach combining different features into one. The user performance experimental analysis pointed out the effectiveness of the proposed framework.",
"title": ""
},
{
"docid": "49af355cfc9e13234a2a3b115f225c1b",
"text": "Tattoos play an important role in many religions. Tattoos have been used for thousands of years as important tools in ritual and tradition. Judaism, Christianity, and Islam have been hostile to the use of tattoos, but many religions, in particular Buddhism and Hinduism, make extensive use of them. This article examines their use as tools for protection and devotion.",
"title": ""
},
{
"docid": "0ce556418f6557d86c59f178a206cd11",
"text": "The efficiency of decision processes which can be divided into two stages has been measured for the whole process as well as for each stage independently by using the conventional data envelopment analysis (DEA) methodology in order to identify the causes of inefficiency. This paper modifies the conventional DEA model by taking into account the series relationship of the two sub-processes within the whole process. Under this framework, the efficiency of the whole process can be decomposed into the product of the efficiencies of the two sub-processes. In addition to this sound mathematical property, the case of Taiwanese non-life insurance companies shows that some unusual results which have appeared in the independent model do not exist in the relational model. In other words, the relational model developed in this paper is more reliable in measuring the efficiencies and consequently is capable of identifying the causes of inefficiency more accurately. Based on the structure of the model, the idea of efficiency decomposition can be extended to systems composed of multiple stages connected in series. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5d0c211333bd484e29c602b4996d1292",
"text": "Humans tend to organize perceived information into hierarchies and structures, a principle that also applies to music. Even musically untrained listeners unconsciously analyze and segment music with regard to various musical aspects, for example, identifying recurrent themes or detecting temporal boundaries between contrasting musical parts. This paper gives an overview of state-of-theart methods for computational music structure analysis, where the general goal is to divide an audio recording into temporal segments corresponding to musical parts and to group these segments into musically meaningful categories. There are many different criteria for segmenting and structuring music audio. In particular, one can identify three conceptually different approaches, which we refer to as repetition-based, novelty-based, and homogeneitybased approaches. Furthermore, one has to account for different musical dimensions such as melody, harmony, rhythm, and timbre. In our state-of-the-art report, we address these different issues in the context of music structure analysis, while discussing and categorizing the most relevant and recent articles in this field.",
"title": ""
},
{
"docid": "5bf761b94840bcab163ae3a321063b8b",
"text": "The simulation method plays an important role in the investigation of the intrabody communication (IBC). Due to the problems of the transfer function and the corresponding parameters, only the simulation of the galvanic coupling IBC along the arm has been achieved at present. In this paper, a method for the mathematical simulation of the galvanic coupling IBC with different signal transmission paths has been introduced. First, a new transfer function of the galvanic coupling IBC was derived with the consideration of the internal resistances of the IBC devices. Second, the determination of the corresponding parameters used in the transfer function was discussed in detail. Finally, both the measurements and the simulations of the galvanic coupling IBC along the different signal transmission paths were carried out. Our investigation shows that the mathematical simulation results coincide with the measurement results over the frequency range from 100 kHz to 5 MHz, which indicates that the proposed method offers the significant advantages in the theoretical analysis and the application of the galvanic coupling IBC.",
"title": ""
},
{
"docid": "e19e6ed491f5f95da5fd3950a5d36217",
"text": "In the consumer credit industry, assessment of default risk is critically important for the financial health of both the lender and the borrower. Methods for predicting risk for an applicant using credit bureau and application data, typically based on logistic regression or survival analysis, are universally employed by credit card companies. Because of the manner in which the predictive models are fit using large historical sets of existing customer data that extend over many years, default trends, anomalies, and other temporal phenomena that result from dynamic economic conditions are not brought to light. We introduce a modification of the proportional hazards survival model that includes a time-dependency mechanism for capturing temporal phenomena, and we develop a maximum likelihood algorithm for fitting the model. Using a very large, real data set, we demonstrate that incorporating the time dependency can provide more accurate risk scoring, as well as important insight into dynamic market effects that can inform and enhance related decision making. Journal of the Operational Research Society (2012) 63, 306–321. doi:10.1057/jors.2011.34 Published online 11 May 2011",
"title": ""
},
{
"docid": "eea5e2eddd2f1c19eed2e4bfd55cbb83",
"text": "This paper presents a rule-based approach for finding out the stems from text in Bengali, a resource-poor language. It starts by introducing the concept of orthographic syllable, the basic orthographic unit of Bengali. Then it discusses the morphological structure of the tokens for different parts of speech, formalizes the inflection rule constructs and formulates a quantitative ranking measure for potential candidate stems of a token. These concepts are applied in the design and implementation of an extensible architecture of a stemmer system for Bengali text. The accuracy of the system is calculated to be ~89% and above.",
"title": ""
},
{
"docid": "578130d8ef9d18041c84ed226af8c84a",
"text": "Ranking and scoring are ubiquitous. We consider the setting in which an institution, called a ranker, evaluates a set of individuals based on demographic, behavioral or other characteristics. The final output is a ranking that represents the relative quality of the individuals. While automatic and therefore seemingly objective, rankers can, and often do, discriminate against individuals and systematically disadvantage members of protected groups. This warrants a careful study of the fairness of a ranking scheme, to enable data science for social good applications, among others.\n In this paper we propose fairness measures for ranked outputs. We develop a data generation procedure that allows us to systematically control the degree of unfairness in the output, and study the behavior of our measures on these datasets. We then apply our proposed measures to several real datasets, and detect cases of bias. Finally, we show preliminary results of incorporating our ranked fairness measures into an optimization framework, and show potential for improving fairness of ranked outputs while maintaining accuracy.\n The code implementing all parts of this work is publicly available at https://github.com/DataResponsibly/FairRank.",
"title": ""
},
{
"docid": "4f4a3b9108786c77c1185c749cf3e010",
"text": "Deep neural network (DNN) has emerged as a very important machine learning and pattern recognition technique in the big data era. Targeting to different types of training and inference tasks, the structure of DNN varies with flexible choices of different component layers, such as fully connection layer, convolutional layer, pooling layer and softmax layer. Deviated from other layers that only require simple operations like addition or multiplication, the softmax layer contains expensive exponentiation and division, thereby causing the hardware design of softmax layer suffering from high complexity, long critical path delay and overflow problems. This paper, for the first time, presents efficient hardware architecture of softmax layer in DNN. By utilizing the domain transformation technique and down-scaling approach, the proposed hardware architecture avoids the aforementioned problems. Analysis shows that the proposed hardware architecture achieves reduced hardware complexity and critical path delay.",
"title": ""
},
{
"docid": "97065954a10665dee95977168b9e6c60",
"text": "We describe the current status of Pad++, a zooming graphical interface that we are exploring as an alternative to traditional window and icon-based approaches to interface design. We discuss the motivation for Pad++, describe the implementation, and present prototype applications. In addition, we introduce an informational physics strategy for interface design and briefly compare it with metaphor-based design strategies.",
"title": ""
}
] |
scidocsrr
|
50e449de1faa3af65b198a0fb6353cdd
|
Distinct balance of excitation and inhibition in an interareal feedforward and feedback circuit of mouse visual cortex.
|
[
{
"docid": "1f364472fcf7da9bfc18d9bb8a521693",
"text": "The Cre/lox system is widely used in mice to achieve cell-type-specific gene expression. However, a strong and universally responding system to express genes under Cre control is still lacking. We have generated a set of Cre reporter mice with strong, ubiquitous expression of fluorescent proteins of different spectra. The robust native fluorescence of these reporters enables direct visualization of fine dendritic structures and axonal projections of the labeled neurons, which is useful in mapping neuronal circuitry, imaging and tracking specific cell populations in vivo. Using these reporters and a high-throughput in situ hybridization platform, we are systematically profiling Cre-directed gene expression throughout the mouse brain in several Cre-driver lines, including new Cre lines targeting different cell types in the cortex. Our expression data are displayed in a public online database to help researchers assess the utility of various Cre-driver lines for cell-type-specific genetic manipulation.",
"title": ""
}
] |
[
{
"docid": "dce51c1fed063c9d9776fce998209d25",
"text": "While classical kernel-based learning algorithms are based on a single kernel, in practice it is often desirable to use multiple kernels. Lankriet et al. (2004) considered conic combinations of kernel matrices for classification, leading to a convex quadratically constrained quadratic program. We show that it can be rewritten as a semi-infinite linear program that can be efficiently solved by recycling the standard SVM implementations. Moreover, we generalize the formulation and our method to a larger class of problems, including regression and one-class classification. Experimental results show that the proposed algorithm works for hundred thousands of examples or hundreds of kernels to be combined, and helps for automatic model selection, improving the interpretability of the learning result. In a second part we discuss general speed up mechanism for SVMs, especially when used with sparse feature maps as appear for string kernels, allowing us to train a string kernel SVM on a 10 million real-world splice dataset from computational biology. We integrated Multiple Kernel Learning in our Machine Learning toolbox SHOGUN for which the source code is publicly available at http://www.fml.tuebingen.mpg.de/raetsch/projects/shogun.",
"title": ""
},
{
"docid": "85da43096d4ef2dcb3f8f9ae9ea2db35",
"text": "We present an approach that combines automatic features learned by convolutional neural networks (CNN) and handcrafted features computed by the bag-of-visual-words (BOVW) model in order to achieve state-of-the-art results in facial expression recognition. To obtain automatic features, we experiment with multiple CNN architectures, pretrained models and training procedures, e.g. Dense-SparseDense. After fusing the two types of features, we employ a local learning framework to predict the class label for each test image. The local learning framework is based on three steps. First, a k-nearest neighbors model is applied for selecting the nearest training samples for an input test image. Second, a one-versus-all Support Vector Machines (SVM) classifier is trained on the selected training samples. Finally, the SVM classifier is used for predicting the class label only for the test image it was trained for. Although we used local learning in combination with handcrafted features in our previous work, to the best of our knowledge, local learning has never been employed in combination with deep features. The experiments on the 2013 Facial Expression Recognition (FER) Challenge data set and the FER+ data set demonstrate that our approach achieves state-ofthe-art results. With a top accuracy of 75.42% on the FER 2013 data set and 87.76% on the FER+ data set, we surpass all competition by more than 2% on both data sets.",
"title": ""
},
{
"docid": "f09f5d7e0f75d4b0fdbd8c40860c4473",
"text": "Purpose – The purpose of this paper is to examine the diffusion of a popular Korean music video on the video-sharing web site YouTube. It applies a webometric approach in the diffusion of innovations framework to study three elements of diffusion in a Web 2.0 environment: users, user-to-user relationship and user-generated comment. Design/methodology/approach – The webometric approach combines profile analyses, social network analyses, semantic and sentiment analyses. Findings – The results show that male users in the US played a dominant role in the early-stage diffusion. The dominant users represented the innovators and early adopters in the evaluation stage of the diffusion, and they engaged in continuous discussions about the cultural origin of the video and expressed criticisms. Overall, the discussion between users varied according to their gender, age, and cultural background. Specifically, male users were more interactive than female users, and users in countries culturally similar to Korea were more likely to express favourable attitudes toward the video. Originality/value – The study provides a webometric approach to examine the Web 2.0-based social system in the early-stage global diffusion of cultural offerings. This approach connects the diffusion of innovations framework to the new context of Web 2.0-based diffusion.",
"title": ""
},
{
"docid": "c57a689627f1af0bf872e4d0c5953a28",
"text": "Image diffusion plays a fundamental role for the task of image denoising. The recently proposed trainable nonlinear reaction diffusion (TNRD) model defines a simple but very effective framework for image denoising. However, as the TNRD model is a local model, whose diffusion behavior is purely controlled by information of local patches, it is prone to create artifacts in the homogenous regions and over-smooth highly textured regions, especially in the case of strong noise levels. Meanwhile, it is widely known that the non-local self-similarity (NSS) prior stands as an effective image prior for image denoising, which has been widely exploited in many non-local methods. In this work, we are highly motivated to embed the NSS prior into the TNRD model to tackle its weaknesses. In order to preserve the expected property that end-to-end training remains available, we exploit the NSS prior by defining a set of non-local filters, and derive our proposed trainable non-local reaction diffusion (TNLRD) model for image denoising. Together with the local filters and influence functions, the non-local filters are learned by employing loss-specific training. The experimental results show that the trained TNLRD model produces visually plausible recovered images with more textures and less artifacts, compared to its local versions. Moreover, the trained TNLRD model can achieve strongly competitive performance to recent state-of-the-art image denoising methods in terms of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM).",
"title": ""
},
{
"docid": "62a8548527371acb657d9552ab41d699",
"text": "This paper proposes a novel dynamic gait of locomotion for hexapedal robots which enables them to crawl forward, backward, and rotate using a single actuator. The gait exploits the compliance difference between the two sides of the tripods, to generate clockwise or counter clockwise rotation by controlling the acceleration of the robot. The direction of turning depends on the configuration of the legs -tripod left of right- and the direction of the acceleration. Alternating acceleration in successive steps allows for continuous rotation in the desired direction. An analysis of the locomotion is presented as a function of the mechanical properties of the robot and the contact with the surface. A numerical simulation was performed for various conditions of locomotion. The results of the simulation and analysis were compared and found to be in excellent match.",
"title": ""
},
{
"docid": "7531be3af1285a4c1c0b752d1ee45f52",
"text": "Given an undirected graph with weight for each vertex, the maximum weight clique problem is to find the clique of the maximum weight. Östergård proposed a fast exact algorithm for solving this problem. We show his algorithm is not efficient for very dense graphs. We propose an exact algorithm for the problem, which is faster than Östergård’s algorithm in case the graph is dense. We show the efficiency of our algorithm with some experimental results.",
"title": ""
},
{
"docid": "9584d194e05359ef5123c6b3d71e1c75",
"text": "A bloom filter is a randomized data structure for performing approximate membership queries. It is being increasingly used in networking applications ranging from security to routing in peer to peer networks. In order to meet a given false positive rate, the amount of memory required by a bloom filter is a function of the number of elements in the set. We consider the problem of minimizing the memory requirements in cases where the number of elements in the set is not known in advance but the distribution or moment information of the number of elements is known. We show how to exploit such information to minimize the expected amount of memory required for the filter. We also show how this approach can significantly reduce memory requirement when bloom filters are constructed for multiple sets in parallel. We show analytically as well as experiments on synthetic and trace data that our approach leads to one to three orders of magnitude reduction in memory compared to a standard bloom filter.",
"title": ""
},
{
"docid": "3dcf6c5e59d4472c0b0e25c96b992f3e",
"text": "This paper presents the design of Ultra Wideband (UWB) microstrip antenna consisting of a circular monopole patch antenna with 3 block stepped (wing). The antenna design is an improvement from previous research and it is simulated using CST Microwave Studio software. This antenna was designed on Rogers 5880 printed circuit board (PCB) with overall size of 26 × 40 × 0.787 mm3 and dielectric substrate, εr = 2.2. The performance of the designed antenna was analyzed in term of bandwidth, gain, return loss, radiation pattern, and verified through actual measurement of the fabricated antenna. 10 dB return loss bandwidth from 3.37 GHz to 10.44 GHz based on 50 ohm characteristic impedance for the transmission line model was obtained.",
"title": ""
},
{
"docid": "501d6ec6163bc8b93fd728412a3e97f3",
"text": "This short paper describes our ongoing research on Greenhouse a zero-positive machine learning system for time-series anomaly detection.",
"title": ""
},
{
"docid": "bea270701da3f8d47b19dc7976000562",
"text": "Spatial information captured from optical remote sensors on board unmanned aerial vehicles (UAVs) has great potential in the automatic surveillance of electrical power infrastructure. For an automatic vision based power line inspection system, detecting power lines from cluttered background an important and challenging task. In this paper, we propose a knowledge-based power line detection method for a vision based UAV surveillance and inspection system. A PCNN filter is developed to remove background noise from the images prior to the Hough transform being employed to detect straight lines. Finally knowledge based line clustering is applied to refine the detection results. The experiment on real image data captured from a UAV platform demonstrates that the proposed approach is effective.",
"title": ""
},
{
"docid": "152c11ef8449d53072bbdb28432641fa",
"text": "Flexible intelligent electronic devices (IEDs) are highly desirable to support free allocation of function to IED by means of software reconfiguration without any change of hardware. The application of generic hardware platforms and component-based software technology seems to be a good solution. Due to the advent of IEC 61850, generic hardware platforms with a standard communication interface can be used to implement different kinds of functions with high flexibility. The remaining challenge is the unified function model that specifies various software components with appropriate granularity and provides a framework to integrate them efficiently. This paper proposes the function-block (FB)-based function model for flexible IEDs. The standard FBs are established by combining the IEC 61850 model and the IEC 61499 model. The design of a simplified distance protection IED using standard FBs is described and investigated. The testing results of the prototype system in MATLAB/Simulink demonstrate the feasibility and flexibility of FB-based IEDs.",
"title": ""
},
{
"docid": "d580021d1e7cfe44e58dbace3d5c7bee",
"text": "We believe that humanoid robots provide new tools to investigate human social cognition, the processes underlying everyday interactions between individuals. Resonance is an emerging framework to understand social interactions that is based on the finding that cognitive processes involved when experiencing a mental state and when perceiving another individual experiencing the same mental state overlap, both at the behavioral and neural levels. We will first review important aspects of his framework. In a second part, we will discuss how this framework is used to address questions pertaining to artificial agents' social competence. We will focus on two types of paradigm, one derived from experimental psychology and the other using neuroimaging, that have been used to investigate humans' responses to humanoid robots. Finally, we will speculate on the consequences of resonance in natural social interactions if humanoid robots are to become integral part of our societies.",
"title": ""
},
{
"docid": "6c00347ffa60b09692bbae45a0c01fc1",
"text": "OBJECTIVES:Eosinophilic gastritis (EG), defined by histological criteria as marked eosinophilia in the stomach, is rare, and large studies in children are lacking. We sought to describe the clinical, endoscopic, and histopathological features of EG, assess for any concurrent eosinophilia at other sites of the gastrointestinal (GI) tract, and evaluate response to dietary and pharmacological therapies.METHODS:Pathology files at our medical center were searched for histological eosinophilic gastritis (HEG) with ≥70 gastric eosinophils per high-power field in children from 2005 to 2011. Pathology slides were evaluated for concurrent eosinophilia in the esophagus, duodenum, and colon. Medical records were reviewed for demographic characteristics, symptoms, endoscopic findings, comorbidities, and response to therapy.RESULTS:Thirty children with severe gastric eosinophilia were identified, median age 7.5 years, 14 of whom had both eosinophilia limited to the stomach and clinical symptoms, fulfilling the clinicopathological definition of EG. Symptoms and endoscopic features were highly variable. History of atopy and food allergies was common. A total of 22% had protein-losing enteropathy (PLE). Gastric eosinophilia was limited to the fundus in two patients. Many patients had associated eosinophilic esophagitis (EoE, 43%) and 21% had eosinophilic enteritis. Response to dietary restriction therapy was high (82% clinical response and 78% histological response). Six out of sixteen patients had persistent EoE despite resolution of their gastric eosinophilia; two children with persistent HEG post therapy developed de novo concurrent EoE.CONCLUSIONS:HEG in children can be present in the antrum and/or fundus. Symptoms and endoscopic findings vary, highlighting the importance of biopsies for diagnosis. HEG is associated with PLE, and with eosinophilia elsewhere in the GI tract including the esophagus. The disease is highly responsive to dietary restriction therapies in children, implicating an allergic etiology. Associated EoE is more resistant to therapy.",
"title": ""
},
{
"docid": "f018db7f20245180d74e4eb07b99e8d3",
"text": "Particle filters can become quite inefficient when being applied to a high-dimensional state space since a prohibitively large number of samples may be required to approximate the underlying density functions with desired accuracy. In this paper, by proposing an adaptive Rao-Blackwellized particle filter for tracking in surveillance, we show how to exploit the analytical relationship among state variables to improve the efficiency and accuracy of a regular particle filter. Essentially, the distributions of the linear variables are updated analytically using a Kalman filter which is associated with each particle in a particle filtering framework. Experiments and detailed performance analysis using both simulated data and real video sequences reveal that the proposed method results in more accurate tracking than a regular particle filter",
"title": ""
},
{
"docid": "0627ea85ea93b56aef5ef378026bc2fc",
"text": "This paper presents a resonant inductive coupling wireless power transfer (RIC-WPT) system with a class-DE and class-E rectifier along with its analytical design procedure. By using the class-DE inverter as a transmitter and the class-E rectifier as a receiver, the designed WPT system can achieve a high power-conversion efficiency because of the class-E ZVS/ZDS conditions satisfied in both the inverter and the rectifier. In the simulation results, the system achieved 79.0 % overall efficiency at 5 W (50 Ω) output power, coupling coefficient 0.072, and 1 MHz operating frequency. Additionally, the simulation results showed good agreement with the design specifications, which indicates the validity of the design procedure.",
"title": ""
},
{
"docid": "da698cfca4e5bbc80fbbab5e8f30e22c",
"text": "This paper base on the application of the Internet of things in the logistics industry as the breakthrough point, to investigate the identification technology, network structure, middleware technology support and so on, which is used in the Internet of things, also to analyze the bottleneck of technology that the Internet of things could meet. At last, summarize the Internet of things’ application in the logistics industry with the intelligent port architecture.",
"title": ""
},
{
"docid": "bbea93884f1f0189be1061939783a1c0",
"text": "Severe adolescent female stress urinary incontinence (SAFSUI) can be defined as female adolescents between the ages of 12 and 17 years complaining of involuntary loss of urine multiple times each day during normal activities or sneezing or coughing rather than during sporting activities. An updated review of its likely prevalence, etiology, and management is required. The case of a 15-year-old female adolescent presenting with a 7-year history of SUI resistant to antimuscarinic medications and 18 months of intensive physiotherapy prompted this review. Issues of performing physical and urodynamic assessment at this young age were overcome in order to achieve the diagnosis of urodynamic stress incontinence (USI). Failed use of tampons was followed by the insertion of (retropubic) suburethral synthetic tape (SUST) under assisted local anesthetic into tissues deemed softer than the equivalent for an adult female. Whereas occasional urinary incontinence can occur in between 6 % and 45 % nulliparous adolescents, the prevalence of non‐neurogenic SAFSUI is uncertain but more likely rare. Risk factors for the occurrence of more severe AFSUI include obesity, athletic activities or high-impact training, and lung diseases such as cystic fibrosis (CF). This first reported use of a SUST in a patient with SAFSUI proved safe and completely curative. Artificial urinary sphincters, periurethral injectables and pubovaginal slings have been tried previously in equivalent patients. SAFSUI is a relatively rare but physically and emotionally disabling presentation. Multiple conservative options may fail, necessitating surgical management; SUST can prove safe and effective.",
"title": ""
},
{
"docid": "b1c62a59a8ce3dd57ab2c00f7657cfef",
"text": "We developed a new method for estimation of vigilance level by using both EEG and EMG signals recorded during transition from wakefulness to sleep. Previous studies used only EEG signals for estimating the vigilance levels. In this study, it was aimed to estimate vigilance level by using both EEG and EMG signals for increasing the accuracy of the estimation rate. In our work, EEG and EMG signals were obtained from 30 subjects. In data preparation stage, EEG signals were separated to its subbands using wavelet transform for efficient discrimination, and chin EMG was used to verify and eliminate the movement artifacts. The changes in EEG and EMG were diagnosed while transition from wakefulness to sleep by using developed artificial neural network (ANN). Training and testing data sets consist of the subbanded components of EEG and power density of EMG signals were applied to the ANN for training and testing the system which gives three situations for the vigilance level of the subject: awake, drowsy, and sleep. The accuracy of estimation was about 98–99% while the accuracy of the previous study, which uses only EEG, was 95–96%.",
"title": ""
},
{
"docid": "497e7a0ed663b2c125650e05f81feae3",
"text": "In this paper we present a novel computer vision library called UAVision that provides support for different digital cameras technologies, from image acquisition to camera calibration, and all the necessary software for implementing an artificial vision system for the detection of color-coded objects. The algorithms behind the object detection focus on maintaining a low processing time, thus the library is suited for real-world real-time applications. The library also contains a TCP Communications Module, with broad interest in robotic applications where the robots are performing remotely from a basestation or from an user and there is the need to access the images acquired by the robot, both for processing or debug purposes. Practical results from the implementation of the same software pipeline using different cameras as part of different types of vision systems are presented. The vision system software pipeline that we present is designed to cope with application dependent time constraints. The experimental results show that using the UAVision library it is possible to use digital cameras at frame rates up to 50 frames per second when working with images of size up to 1 megapixel. Moreover, we present experimental results to show the effect of the frame rate in the delay between the perception of the world and the action of an autonomous robot, as well as the use of raw data from the camera sensor and the implications of this in terms of the referred delay.",
"title": ""
}
] |
scidocsrr
|
090383e63402a75b42eb80a6456f6689
|
Semi-supervised learning approach for Indonesian Named Entity Recognition (NER) using co-training algorithm
|
[
{
"docid": "70e6148316bd8915afd8d0908fb5ab0d",
"text": "We consider the problem of using a large unla beled sample to boost performance of a learn ing algorithm when only a small set of labeled examples is available In particular we con sider a problem setting motivated by the task of learning to classify web pages in which the description of each example can be partitioned into two distinct views For example the de scription of a web page can be partitioned into the words occurring on that page and the words occurring in hyperlinks that point to that page We assume that either view of the example would be su cient for learning if we had enough labeled data but our goal is to use both views together to allow inexpensive unlabeled data to augment a much smaller set of labeled ex amples Speci cally the presence of two dis tinct views of each example suggests strategies in which two learning algorithms are trained separately on each view and then each algo rithm s predictions on new unlabeled exam ples are used to enlarge the training set of the other Our goal in this paper is to provide a PAC style analysis for this setting and more broadly a PAC style framework for the general problem of learning from both labeled and un labeled data We also provide empirical results on real web page data indicating that this use of unlabeled examples can lead to signi cant improvement of hypotheses in practice This paper is to appear in the Proceedings of the Conference on Computational Learning Theory This research was supported in part by the DARPA HPKB program under contract F and by NSF National Young Investigator grant CCR INTRODUCTION In many machine learning settings unlabeled examples are signi cantly easier to come by than labeled ones One example of this is web page classi cation Suppose that we want a program to electronically visit some web site and download all the web pages of interest to us such as all the CS faculty member pages or all the course home pages at some university To train such a system to automatically classify web pages one would typically rely on hand labeled web pages These labeled examples are fairly expensive to obtain because they require human e ort In contrast the web has hundreds of millions of unlabeled web pages that can be inexpensively gathered using a web crawler Therefore we would like our learning algorithm to be able to take as much advantage of the unlabeled data as possible This web page learning problem has an interesting feature Each example in this domain can naturally be described using several di erent kinds of information One kind of information about a web page is the text appearing on the document itself A second kind of information is the anchor text attached to hyperlinks pointing to this page from other pages on the web The two problem characteristics mentioned above availability of both labeled and unlabeled data and the availability of two di erent kinds of information about examples suggest the following learning strat egy Using an initial small set of labeled examples nd weak predictors based on each kind of information for instance we might nd that the phrase research inter ests on a web page is a weak indicator that the page is a faculty home page and we might nd that the phrase my advisor on a link is an indicator that the page being pointed to is a faculty page Then attempt to bootstrap from these weak predictors using unlabeled data For instance we could search for pages pointed to with links having the phrase my advisor and use them as probably positive examples to further train a 
learning algorithm based on the words on the text page and vice versa We call this type of bootstrapping co training and it has a close connection to bootstrapping from incomplete data in the Expectation Maximization setting see for instance The question this raises is is there any reason to believe co training will help Our goal is to address this question by developing a PAC style theoretical framework to better understand the issues involved in this approach We also give some preliminary empirical results on classifying university web pages see Section that are encouraging in this context More broadly the general question of how unlabeled examples can be used to augment labeled data seems a slippery one from the point of view of standard PAC as sumptions We address this issue by proposing a notion of compatibility between a data distribution and a target function Section and discuss how this relates to other approaches to combining labeled and unlabeled data Section",
"title": ""
},
{
"docid": "89aa60cefe11758e539f45c5cba6f48a",
"text": "For undergraduate or advanced undergraduate courses in Classical Natural Language Processing, Statistical Natural Language Processing, Speech Recognition, Computational Linguistics, and Human Language Processing. An explosion of Web-based language techniques, merging of distinct fields, availability of phone-based dialogue systems, and much more make this an exciting time in speech and language processing. The first of its kind to thoroughly cover language technology at all levels and with all modern technologies this text takes an empirical approach to the subject, based on applying statistical and other machine-learning algorithms to large corporations. The authors cover areas that traditionally are taught in different courses, to describe a unified vision of speech and language processing. Emphasis is on practical applications and scientific evaluation. An accompanying Website contains teaching materials for instructors, with pointers to language processing resources on the Web. The Second Edition offers a significant amount of new and extended material. Supplements: Click on the \"Resources\" tab to View Downloadable Files:Solutions Power Point Lecture Slides Chapters 1-5, 8-10, 12-13 and 24 Now Available! For additional resourcse visit the author website: http://www.cs.colorado.edu/~martin/slp.html",
"title": ""
}
] |
[
{
"docid": "cb1bfa58eb89539663be0f2b4ea8e64d",
"text": "Hierarchical clustering is a recursive partitioning of a dataset into clusters at an increasingly finer granularity. Motivated by the fact that most work on hierarchical clustering was based on providing algorithms, rather than optimizing a specific objective, Dasgupta framed similarity-based hierarchical clustering as a combinatorial optimization problem, where a ‘good’ hierarchical clustering is one that minimizes a particular cost function [21]. He showed that this cost function has certain desirable properties: in order to achieve optimal cost, disconnected components (namely, dissimilar elements) must be separated at higher levels of the hierarchy and when the similarity between data elements is identical, all clusterings achieve the same cost. We take an axiomatic approach to defining ‘good’ objective functions for both similarity and dissimilarity-based hierarchical clustering. We characterize a set of admissible objective functions having the property that when the input admits a ‘natural’ ground-truth hierarchical clustering, the ground-truth clustering has an optimal value. We show that this set includes the objective function introduced by Dasgupta. Equipped with a suitable objective function, we analyze the performance of practical algorithms, as well as develop better and faster algorithms for hierarchical clustering. We also initiate a beyond worst-case analysis of the complexity of the problem, and design algorithms for this scenario.",
"title": ""
},
{
"docid": "f405c62d932eec05c55855eb13ba804c",
"text": "Multilevel converters have been under research and development for more than three decades and have found successful industrial application. However, this is still a technology under development, and many new contributions and new commercial topologies have been reported in the last few years. The aim of this paper is to group and review these recent contributions, in order to establish the current state of the art and trends of the technology, to provide readers with a comprehensive and insightful review of where multilevel converter technology stands and is heading. This paper first presents a brief overview of well-established multilevel converters strongly oriented to their current state in industrial applications to then center the discussion on the new converters that have made their way into the industry. In addition, new promising topologies are discussed. Recent advances made in modulation and control of multilevel converters are also addressed. A great part of this paper is devoted to show nontraditional applications powered by multilevel converters and how multilevel converters are becoming an enabling technology in many industrial sectors. Finally, some future trends and challenges in the further development of this technology are discussed to motivate future contributions that address open problems and explore new possibilities.",
"title": ""
},
{
"docid": "967df203ea4a9f1ac90bb7f6bb498b6e",
"text": "Traditional quantum error-correcting codes are designed for the depolarizing channel modeled by generalized Pauli errors occurring with equal probability. Amplitude damping channels model, in general, the decay process of a multilevel atom or energy dissipation of a bosonic system with Markovian bath at zero temperature. We discuss quantum error-correcting codes adapted to amplitude damping channels for higher dimensional systems (qudits). For multi-level atoms, we consider a natural kind of decay process, and for bosonic systems, we consider the qudit amplitude damping channel obtained by truncating the Fock basis of the bosonic modes (e.g., the number of photons) to a certain maximum occupation number. We construct families of single-error-correcting quantum codes that can be used for both cases. Our codes have larger code dimensions than the previously known single-error-correcting codes of the same lengths. In addition, we present families of multi-error correcting codes for these two channels, as well as generalizations of our construction technique to error-correcting codes for the qutrit <inline-formula> <tex-math notation=\"LaTeX\">$V$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$\\Lambda $ </tex-math></inline-formula> channels.",
"title": ""
},
{
"docid": "c2659be74498ec68c3eb5509ae11b3c3",
"text": "We focus on modeling human activities comprising multiple actions in a completely unsupervised setting. Our model learns the high-level action co-occurrence and temporal relations between the actions in the activity video. We consider the video as a sequence of short-term action clips, called action-words, and an activity is about a set of action-topics indicating which actions are present in the video. Then we propose a new probabilistic model relating the action-words and the action-topics. It allows us to model long-range action relations that commonly exist in the complex activity, which is challenging to capture in the previous works. We apply our model to unsupervised action segmentation and recognition, and also to a novel application that detects forgotten actions, which we call action patching. For evaluation, we also contribute a new challenging RGB-D activity video dataset recorded by the new Kinect v2, which contains several human daily activities as compositions of multiple actions interacted with different objects. The extensive experiments show the effectiveness of our model.",
"title": ""
},
{
"docid": "771611dc99e22b054b936fce49aea7fc",
"text": "Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various highdimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domaindependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.",
"title": ""
},
{
"docid": "f63866fcb11eae78b5095e8f7d21cf8a",
"text": "H.264/MPEG4-AVC is the latest video coding standard of the ITU-T video coding experts group (VCEG) and the ISO/IEC moving picture experts group (MPEG). H.264/MPEG4-AVC has recently become the most widely accepted video coding standard since the deployment of MPEG2 at the dawn of digital television, and it may soon overtake MPEG2 in common use. It covers all common video applications ranging from mobile services and videoconferencing to IPTV, HDTV, and HD video storage. This article discusses the technology behind the new H.264/MPEG4-AVC standard, focusing on the main distinct features of its core coding technology and its first set of extensions, known as the fidelity range extensions (FRExt). In addition, this article also discusses the current status of adoption and deployment of the new standard in various application areas",
"title": ""
},
{
"docid": "8c6514a40f1c4ef55cb34336be9b968a",
"text": "This survey (N1⁄4 224) found that characteristics collectively known as the Dark Triad (i.e. narcissism, psychopathy and Machiavellianism) were correlated with various dimensions of short-term mating but not long-term mating. The link between the Dark Triad and shortterm mating was stronger for men than for women. The Dark Triad partially mediated the sex difference in short-term mating behaviour. Findings are consistent with a view that the Dark Triad facilitates an exploitative, short-term mating strategy in men. Possible implications, including that Dark Triad traits represent a bundle of individual differences that promote a reproductively adaptive strategy are discussed. Findings are discussed in the broad context of how an evolutionary approach to personality psychology can enhance our understanding of individual differences. Copyright # 2008 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "418de962446199744b4ced735c506d41",
"text": "In this paper, a stereo matching algorithm based on image segments is presented. We propose the hybrid segmentation algorithm that is based on a combination of the Belief Propagation and Mean Shift algorithms with aim to refine the disparity and depth map by using a stereo pair of images. This algorithm utilizes image filtering and modified SAD (Sum of Absolute Differences) stereo matching method. Firstly, a color based segmentation method is applied for segmenting the left image of the input stereo pair (reference image) into regions. The aim of the segmentation is to simplify representation of the image into the form that is easier to analyze and is able to locate objects in images. Secondly, results of the segmentation are used as an input of the local window-based matching method to determine the disparity estimate of each image pixel. The obtained experimental results demonstrate that the final depth map can be obtained by application of segment disparities to the original images. Experimental results with the stereo testing images show that our proposed Hybrid algorithm HSAD gives a good performance.",
"title": ""
},
{
"docid": "79f1473d4eb0c456660543fda3a648f1",
"text": "Weexamine the problem of learning and planning on high-dimensional domains with long horizons and sparse rewards. Recent approaches have shown great successes in many Atari 2600 domains. However, domains with long horizons and sparse rewards, such as Montezuma’s Revenge and Venture, remain challenging for existing methods. Methods using abstraction [5, 13] have shown to be useful in tackling long-horizon problems. We combine recent techniques of deep reinforcement learning with existing model-based approaches using an expert-provided state abstraction. We construct toy domains that elucidate the problem of long horizons, sparse rewards and high-dimensional inputs, and show that our algorithm significantly outperforms previous methods on these domains. Our abstraction-based approach outperforms Deep QNetworks [11] on Montezuma’s Revenge and Venture, and exhibits backtracking behavior that is absent from previous methods.",
"title": ""
},
{
"docid": "3db8dc56e573488c5085bf5a61ea0d7f",
"text": "This paper proposes new approximate coloring and other related techniques which markedly improve the run time of the branchand-bound algorithm MCR (J. Global Optim., 37, 95–111, 2007), previously shown to be the fastest maximum-clique-finding algorithm for a large number of graphs. The algorithm obtained by introducing these new techniques in MCR is named MCS. It is shown that MCS is successful in reducing the search space quite efficiently with low overhead. Consequently, it is shown by extensive computational experiments that MCS is remarkably faster than MCR and other existing algorithms. It is faster than the other algorithms by an order of magnitude for several graphs. In particular, it is faster than MCR for difficult graphs of very high density and for very large and sparse graphs, even though MCS is not designed for any particular type of graphs. MCS can be faster than MCR by a factor of more than 100,000 for some extremely dense random graphs.",
"title": ""
},
{
"docid": "f34af647319436085ab8e667bab795b0",
"text": "In the transition from industrial to service robotics, robo ts will have to deal with increasingly unpredictable and variable environments. We present a system that is able to recognize objects of a certain class in an image and to identify their parts for potential interactions. The metho d can recognize objects from arbitrary viewpoints and generalizes to instances that have never been observed during training, even if they are partially occluded and appear against cluttered backgrounds. Our approach builds on the Implicit Shape Model of Leibe et al. (2008). We extend it to couple recognition to the provision of meta-data useful for a task and to the case of multiple viewpoints by integrating it with the dense multi-view correspondence finder of Ferrari et al. (2006). Meta-data can be part labels but also depth estimates, information on material types, or any other pixelwise annotation. We present experimental results on wheelchairs, cars, and motorbikes.",
"title": ""
},
{
"docid": "da3201add57485d574c71c6fa95fc28c",
"text": "Two experiments (modeled after J. Deese's 1959 study) revealed remarkable levels of false recall and false recognition in a list learning paradigm. In Experiment 1, subjects studied lists of 12 words (e.g., bed, rest, awake); each list was composed of associates of 1 nonpresented word (e.g., sleep). On immediate free recall tests, the nonpresented associates were recalled 40% of the time and were later recognized with high confidence. In Experiment 2, a false recall rate of 55% was obtained with an expanded set of lists, and on a later recognition test, subjects produced false alarms to these items at a rate comparable to the hit rate. The act of recall enhanced later remembering of both studied and nonstudied material. The results reveal a powerful illusion of memory: People remember events that never happened.",
"title": ""
},
{
"docid": "078578f356cb7946e3956c571bef06ee",
"text": "Background: Dysphagia is common and costly. The ability of patient symptoms to predict objective swallowing dysfunction is uncertain. Purpose: This study aimed to evaluate the ability of the Eating Assessment Tool (EAT-10) to screen for aspiration risk in patients with dysphagia. Methods: Data from individuals with dysphagia undergoing a videofluoroscopic swallow study between January 2012 and July 2013 were abstracted from a clinical database. Data included the EAT-10, Penetration Aspiration Scale (PAS), total pharyngeal transit (TPT) time, and underlying diagnoses. Bivariate linear correlation analysis, sensitivity, specificity, and predictive values were calculated. Results: The mean age of the entire cohort (N = 360) was 64.40 (± 14.75) years. Forty-six percent were female. The mean EAT-10 was 16.08 (± 10.25) for nonaspirators and 23.16 (± 10.88) for aspirators (P < .0001). There was a linear correlation between the total EAT-10 score and the PAS (r = 0.273, P < .001). Sensitivity and specificity of an EAT-10 > 15 in predicting aspiration were 71% and 53%, respectively. Conclusion: Subjective dysphagia symptoms as documented with the EAT-10 can predict aspiration risk. A linear correlation exists between the EAT-10 and aspiration events (PAS) and aspiration risk (TPT time). Persons with an EAT10 > 15 are 2.2 times more likely to aspirate (95% confidence interval, 1.3907-3.6245). The sensitivity of an EAT-10 > 15 is 71%.",
"title": ""
},
{
"docid": "2bb21a94c803c74ad6c286c7a04b8c5b",
"text": "Recently, social media, such as Twitter, has been successfully used as a proxy to gauge the impacts of disasters in real time. However, most previous analyses of social media during disaster response focus on the magnitude and location of social media discussion. In this work, we explore the impact that disasters have on the underlying sentiment of social media streams. During disasters, people may assume negative sentiments discussing lives lost and property damage, other people may assume encouraging responses to inspire and spread hope. Our goal is to explore the underlying trends in positive and negative sentiment with respect to disasters and geographically related sentiment. In this paper, we propose a novel visual analytics framework for sentiment visualization of geo-located Twitter data. The proposed framework consists of two components, sentiment modeling and geographic visualization. In particular, we provide an entropy-based metric to model sentiment contained in social media data. The extracted sentiment is further integrated into a visualization framework to explore the uncertainty of public opinion. We explored Ebola Twitter dataset to show how visual analytics techniques and sentiment modeling can reveal interesting patterns in disaster scenarios.",
"title": ""
},
{
"docid": "ce650daedc7ba277d245a2150062775f",
"text": "Amongst the large number of write-and-throw-away-spreadsheets developed for one-time use there is a rather neglected proportion of spreadsheets that are huge, periodically used, and submitted to regular update-cycles like any conventionally evolving valuable legacy application software. However, due to the very nature of spreadsheets, their evolution is particularly tricky and therefore error-prone. In our strive to develop tools and methodologies to improve spreadsheet quality, we analysed consolidation spreadsheets of an internationally operating company for the errors they contain. The paper presents the results of the field audit, involving 78 spreadsheets with 60,446 non-empty cells. As a by-product, the study performed was also to validate our analysis tools in an industrial context. The evaluated auditing tool offers the auditor a new view on the formula structure of the spreadsheet by grouping similar formulas into equivalence classes. Our auditing approach defines three similarity criteria between formulae, namely copy, logical and structural equivalence. To improve the visualization of large spreadsheets, equivalences and data dependencies are displayed in separated windows that are interlinked with the spreadsheet. The auditing approach helps to find irregularities in the geometrical pattern of similar formulas.",
"title": ""
},
{
"docid": "867bd0c5f0760715bdfdaeea1290c72f",
"text": "In this paper, we propose a real-time lane detection algorithm based on a hyperbola-pair lane boundary model and an improved RANSAC paradigm. Instead of modeling each road boundary separately, we propose a model to describe the road boundary as a pair of parallel hyperbolas on the ground plane. A fuzzy measurement is introduced into the RANSAC paradigm to improve the accuracy and robustness of fitting the points on the boundaries into the model. Our method is able to deal with existence of partial occlusion, other traffic participants and markings, etc. Experiment in many different conditions, including various conditions of illumination, weather and road, demonstrates its high performance and accuracy",
"title": ""
},
{
"docid": "3810c6b33a895730bc57fdc658d3f72e",
"text": "Comics have been shown to be able to tell a story by guiding the viewers gaze patterns through a sequence of images. However, not much research has been done on how comic techniques affect these patterns. We focused this study to investigate the effect that the structure of a comics panels have on the viewers reading patterns, specifically with the time spent reading the comic and the number of times the viewer fixates on a point. We use two versions of a short comic as a stimulus, one version with four long panels and another with sixteen smaller panels. We collected data using the GazePoint eye tracker, focusing on viewing time and number of fixations, and we collected subjective information about the viewers preferences using a questionnaire. We found that no significant effect between panel structure and viewing time or number of fixations, but those viewers slightly tended to prefer the format of four long panels.",
"title": ""
},
{
"docid": "9e3bba7a681a838fb0b32c1e06eaae93",
"text": "This review focuses on the synthesis, protection, functionalization, and application of magnetic nanoparticles, as well as the magnetic properties of nanostructured systems. Substantial progress in the size and shape control of magnetic nanoparticles has been made by developing methods such as co-precipitation, thermal decomposition and/or reduction, micelle synthesis, and hydrothermal synthesis. A major challenge still is protection against corrosion, and therefore suitable protection strategies will be emphasized, for example, surfactant/polymer coating, silica coating and carbon coating of magnetic nanoparticles or embedding them in a matrix/support. Properly protected magnetic nanoparticles can be used as building blocks for the fabrication of various functional systems, and their application in catalysis and biotechnology will be briefly reviewed. Finally, some future trends and perspectives in these research areas will be outlined.",
"title": ""
},
{
"docid": "474134af25f1a5cd95b3bc29b3df8be4",
"text": "The challenge of combatting malware designed to breach air-gap isolation in order to leak data.",
"title": ""
},
{
"docid": "e3f1ad001f0fc8a3944e5b35fd085a42",
"text": "In recent years, training image segmentation networks often needs fine-tuning the model which comes from the initial training upon large-scale classification datasets like ImageNet. Such fine-tuning methods are confronted with three problems: (1) domain gap. (2) mismatch between data size and model size. (3) poor controllability. A more practical solution is to train the segmentation model from scratch, which motivates our Dense In Dense (DID) network. In DID, we put forward an efficient architecture based on DenseNet to further accelerate the information flow inside and outside the dense block. Deep supervision also applies to a progressive upsampling rather than the traditional straightforward upsampling. Our DID Network performs favorably on Camvid dataset, Inria Aerial Image Labeling dataset and Cityscapes by training from scratch with less parameters.",
"title": ""
}
] |
scidocsrr
|
fd302182a0cfdfdb5efdbe8e0d2473c6
|
A Joint Segmentation and Classification Framework for Sentence Level Sentiment Classification
|
[
{
"docid": "6081f8b819133d40522a4698d4212dfc",
"text": "We present a lexicon-based approach to extracting sentiment from text. The Semantic Orientation CALculator (SO-CAL) uses dictionaries of words annotated with their semantic orientation (polarity and strength), and incorporates intensification and negation. SO-CAL is applied to the polarity classification task, the process of assigning a positive or negative label to a text that captures the text's opinion towards its main subject matter. We show that SO-CAL's performance is consistent across domains and in completely unseen data. Additionally, we describe the process of dictionary creation, and our use of Mechanical Turk to check dictionaries for consistency and reliability.",
"title": ""
},
{
"docid": "d5b986cf02b3f9b01e5307467c1faec2",
"text": "Most sentiment analysis approaches use as baseline a support vector machines (SVM) classifier with binary unigram weights. In this paper, we explore whether more sophisticated feature weighting schemes from Information Retrieval can enhance classification accuracy. We show that variants of the classictf.idf scheme adapted to sentiment analysis provide significant increases in accuracy, especially when using a sublinear function for term frequency weights and document frequency smoothing. The techniques are tested on a wide selection of data sets and produce the best accuracy to our knowledge.",
"title": ""
},
{
"docid": "cf3804e332e9bec1120261f9e4f98da8",
"text": "We propose Bilingually-constrained Recursive Auto-encoders (BRAE) to learn semantic phrase embeddings (compact vector representations for phrases), which can distinguish the phrases with different semantic meanings. The BRAE is trained in a way that minimizes the semantic distance of translation equivalents and maximizes the semantic distance of nontranslation pairs simultaneously. After training, the model learns how to embed each phrase semantically in two languages and also learns how to transform semantic embedding space in one language to the other. We evaluate our proposed method on two end-to-end SMT tasks (phrase table pruning and decoding with phrasal semantic similarities) which need to measure semantic similarity between a source phrase and its translation candidates. Extensive experiments show that the BRAE is remarkably effective in these two tasks.",
"title": ""
}
] |
[
{
"docid": "81476f837dd763301ba065ac78c5bb65",
"text": "Background: The ideal lip augmentation technique provides the longest period of efficacy, lowest complication rate, and best aesthetic results. A myriad of techniques have been described for lip augmentation, but the optimal approach has not yet been established. This systematic review with metaregression will focus on the various filling procedures for lip augmentation (FPLA), with the goal of determining the optimal approach. Methods: A systematic search for all English, French, Spanish, German, Italian, Portuguese and Dutch language studies involving FPLA was performed using these databases: Elsevier Science Direct, PubMed, Highwire Press, Springer Standard Collection, SAGE, DOAJ, Sweetswise, Free E-Journals, Ovid Lippincott Williams & Wilkins, Willey Online Library Journals, and Cochrane Plus. The reference section of every study selected through this database search was subsequently examined to identify additional relevant studies. Results: The database search yielded 29 studies. Nine more studies were retrieved from the reference sections of these 29 studies. The level of evidence ratings of these 38 studies were as follows: level Ib, four studies; level IIb, four studies; level IIIb, one study; and level IV, 29 studies. Ten studies were prospective. Conclusions: This systematic review sought to highlight all the quality data currently available regarding FPLA. Because of the considerable diversity of procedures, no definitive comparisons or conclusions were possible. Additional prospective studies and clinical trials are required to more conclusively determine the most appropriate approach for this procedure. Level of evidence: IV. © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fc9eb12afb2c86005ae4f06835feb6cc",
"text": "Peer pressure is a reoccurring phenomenon in criminal or deviant behaviour especially, as it pertains to adolescents. It may begin in early childhood of about 5years and increase through childhood to become more intense in adolescence years. This paper examines how peer pressure is present in adolescents and how it may influence or create the leverage to non-conformity to societal norms and laws. The paper analyses the process and occurrence of peer influence and pressure on individuals and groups within the framework of the social learning and the social control theories. Major features of the peer pressure process are identified as group dynamics, delinquent peer subculture, peer approval of delinquent behaviour and sanctions for non-conformity which include ridicule, mockery, ostracism and even mayhem or assault in some cases. Also, the paper highlights acceptance and rejection as key concepts that determine the sway or gladiation of adolescents to deviant and criminal behaviour. Finally, it concludes that peer pressure exists for conformity and in delinquent subculture, the result is conformity to criminal codes and behaviour. The paper recommends more urgent, serious and offensive grass root approaches by governments and institutions against this growing threat to the continued peace, orderliness and development of society.",
"title": ""
},
{
"docid": "70a9aa97fc51452fb87288c86d0299d6",
"text": "The germline precursor to the ferrochelatase antibody 7G12 was found to bind the polyether jeffamine in addition to its cognate hapten N-methylmesoporphyrin. A comparison of the X-ray crystal structures of the ligand-free germline Fab and its complex with either hapten or jeffamine reveals that the germline antibody undergoes significant conformational changes upon the binding of these two structurally distinct ligands, which lead to increased antibody-ligand complementarity. The five somatic mutations introduced during affinity maturation lead to enhanced binding affinity for hapten and a loss in affinity for jeffamine. Moreover, a comparison of the crystal structures of the germline and affinity-matured antibodies reveals that somatic mutations not only fix the optimal binding site conformation for the hapten, but also introduce interactions that interfere with the binding of non-hapten molecules. The structural plasticity of this germline antibody and the structural effects of the somatic mutations that result in enhanced affinity and specificity for hapten likely represent general mechanisms used by the immune response, and perhaps primitive proteins, to evolve high affinity, selective receptors for so many distinct chemical structures.",
"title": ""
},
{
"docid": "6d589aaae8107bf6b71c0f06f7a49a28",
"text": "1. INTRODUCTION The explosion of digital connectivity, the significant improvements in communication and information technologies and the enforced global competition are revolutionizing the way business is performed and the way organizations compete. A new, complex and rapidly changing economic order has emerged based on disruptive innovation, discontinuities, abrupt and seditious change. In this new landscape, knowledge constitutes the most important factor, while learning, which emerges through cooperation, together with the increased reliability and trust, is the most important process (Lundvall and Johnson, 1994). The competitive survival and ongoing sustenance of an organisation primarily depend on its ability to redefine and adopt continuously goals, purposes and its way of doing things (Malhotra, 2001). These trends suggest that private and public organizations have to reinvent themselves through 'continuous non-linear innovation' in order to sustain themselves and achieve strategic competitive advantage. The extant literature highlights the great potential of ICT tools for operational efficiency, cost reduction, quality of services, convenience, innovation and learning in private and public sectors. However, scholarly investigations have focused primarily on the effects and outcomes of ICTs (Information & Communication Technology) for the private sector. The public sector has been sidelined because it tends to lag behind in the process of technology adoption and business reinvention. Only recently has the public sector come to recognize the potential importance of ICT and e-business models as a means of improving the quality and responsiveness of the services they provide to their citizens, expanding the reach and accessibility of their services and public infrastructure and allowing citizens to experience a faster and more transparent form of access to government services. The initiatives of government agencies and departments to use ICT tools and applications, Internet and mobile devices to support good governance, strengthen existing relationships and build new partnerships within civil society, are known as eGovernment initiatives. As with e-commerce, eGovernment represents the introduction of a great wave of technological innovation as well as government reinvention. It represents a tremendous impetus to move forward in the 21 st century with higher quality, cost effective government services and a better relationship between citizens and government (Fang, 2002). Many government agencies in developed countries have taken progressive steps toward the web and ICT use, adding coherence to all local activities on the Internet, widening local access and skills, opening up interactive services for local debates, and increasing the participation of citizens on promotion and management …",
"title": ""
},
{
"docid": "409baee7edaec587727624192eab93aa",
"text": "It has been widely shown that recognition memory includes two distinct retrieval processes: familiarity and recollection. Many studies have shown that recognition memory can be facilitated when there is a perceptual match between the studied and the tested items. Most event-related potential studies have explored the perceptual match effect on familiarity on the basis of the hypothesis that the specific event-related potential component associated with familiarity is the FN400 (300-500 ms mid-frontal effect). However, it is currently unclear whether the FN400 indexes familiarity or conceptual implicit memory. In addition, on the basis of the findings of a previous study, the so-called perceptual manipulations in previous studies may also involve some conceptual alterations. Therefore, we sought to determine the influence of perceptual manipulation by color changes on recognition memory when the perceptual or the conceptual processes were emphasized. Specifically, different instructions (perceptually or conceptually oriented) were provided to the participants. The results showed that color changes may significantly affect overall recognition memory behaviorally and that congruent items were recognized with a higher accuracy rate than incongruent items in both tasks, but no corresponding neural changes were found. Despite the evident familiarity shown in the two tasks (the behavioral performance of recognition memory was much higher than at the chance level), the FN400 effect was found in conceptually oriented tasks, but not perceptually oriented tasks. It is thus highly interesting that the FN400 effect was not induced, although color manipulation of recognition memory was behaviorally shown, as seen in previous studies. Our findings of the FN400 effect for the conceptual but not perceptual condition support the explanation that the FN400 effect indexes conceptual implicit memory.",
"title": ""
},
{
"docid": "1b6ddffacc50ad0f7e07675cfe12c282",
"text": "Realism in computer-generated images requires accurate input models for lighting, textures and BRDFs. One of the best ways of obtaining high-quality data is through measurements of scene attributes from real photographs by inverse rendering. However, inverse rendering methods have been largely limited to settings with highly controlled lighting. One of the reasons for this is the lack of a coherent mathematical framework for inverse rendering under general illumination conditions. Our main contribution is the introduction of a signal-processing framework which describes the reflected light field as a convolution of the lighting and BRDF, and expresses it mathematically as a product of spherical harmonic coefficients of the BRDF and the lighting. Inverse rendering can then be viewed as deconvolution. We apply this theory to a variety of problems in inverse rendering, explaining a number of previous empirical results. We will show why certain problems are ill-posed or numerically ill-conditioned, and why other problems are more amenable to solution. The theory developed here also leads to new practical representations and algorithms. For instance, we present a method to factor the lighting and BRDF from a small number of views, i.e. to estimate both simultaneously when neither is known.",
"title": ""
},
{
"docid": "eced59d8ec159f3127e7d2aeca76da96",
"text": "Mano-a-Mano is a unique spatial augmented reality system that combines dynamic projection mapping, multiple perspective views and device-less interaction to support face to face, or dyadic, interaction with 3D virtual objects. Its main advantage over more traditional AR approaches, such as handheld devices with composited graphics or see-through head worn displays, is that users are able to interact with 3D virtual objects and each other without cumbersome devices that obstruct face to face interaction. We detail our prototype system and a number of interactive experiences. We present an initial user experiment that shows that participants are able to deduce the size and distance of a virtual projected object. A second experiment shows that participants are able to infer which of a number of targets the other user indicates by pointing.",
"title": ""
},
{
"docid": "dae63c2eb42acf7c5aa75948169abbbf",
"text": "This paper introduces a local planner which computes a set of commands, allowing an autonomous vehicle to follow a given trajectory. To do so, the platform relies on a localization system, a map and a cost map which represents the obstacles in the environment. The presented method computes a set of tentative trajectories, using a schema based on a Frenet frame obtained from the global planner. These trajectories are then scored using a linear combination of weighted cost functions. In the presented approach, new weights are introduced in order to satisfy the specificities of our autonomous platform, Verdino. A study on the influence of the defined weights in the final behavior of the vehicle is introduced. From these tests, several configurations have been chosen and ranked according to two different proposed behaviors. The method has been tested both in simulation and in real conditions.",
"title": ""
},
{
"docid": "13f24b04e37c9e965d85d92e2c588c9a",
"text": "In this paper we propose a new user purchase preference model based on their implicit feedback behavior. We analyze user behavior data to seek their purchase preference signals. We find that if a user has more purchase preference on a certain item he would tend to browse it for more times. It gives us an important inspiration that, not only purchasing behavior but also other types of implicit feedback like browsing behavior, can indicate user purchase preference. We further find that user purchase preference signals also exist in the browsing behavior of item categories. Therefore, when we want to predict user purchase preference for certain items, we can integrate these behavior types into our user preference model by converting such preference signals into numerical values. We evaluate our model on a real-world dataset from a shopping site in China. Results further validate that user purchase preference model in our paper can capture more and accurate user purchase preference information from implicit feedback and greatly improves the performance of user purchase prediction.",
"title": ""
},
{
"docid": "2b42cf158d38153463514ed7bc00e25f",
"text": "The Disney Corporation made their first princess film in 1937 and has continued producing these movies. Over the years, Disney has received criticism for their gender interpretations and lack of racial diversity. This study will examine princess films from the 1990’s and 2000’s and decide whether race or time has an effect on the gender role portrayal of each character. By using a content analysis, this study identified the changes with each princess. The findings do suggest the princess characters exhibited more egalitarian behaviors over time. 1 The Disney Princess franchise began in 1937 with Snow White and the Seven Dwarfs and continues with the most recent film was Tangled (Rapunzel) in 2011. In past years, Disney film makers were criticized by the public audience for lack of ethnic diversity. In 1995, Disney introduced Pocahontas and three years later Mulan emerged creating racial diversity to the collection. Eleven years later, Disney released The Princess and the Frog (2009). The ongoing question is whether diverse princesses maintain the same qualities as their European counterparts. Walt Disney’s legacy lives on, but viewers are still curious about the all white princess collection which did not gain racial counterparts until 58 years later. It is important to recognize the role the Disney Corporation plays in today’s society. The company has several princesses’ films with matching merchandise. Parents purchase the items for their children and through film and merchandise, children are receiving messages such as how a woman ought to act, think or dress. Gender construction in Disney princess films remains important because of the messages it sends to children. We need to know whether gender roles presented in the films downplay the intellect of a woman in a modern society or whether Disney princesses are constricted to the female gender roles such as submissiveness and nurturing. In addition, we need to consider whether the messages are different for diverse princesses. The purpose of the study is to investigate the changes in gender construction in Disney princess characters related to the race of the character. This research also examines how gender construction of Disney princess characters changed from the 1900’s to 2000’s. A comparative content analysis will analyze gender role differences between women of color and white princesses. In particular, the study will ask whether race does matter in the gender roles revealed among each female character. By using social construction perspectives, Disney princesses of color were more masculine, but the most recent films became more egalitarian. 2 LITERATURE REVIEW Women in Disney film Davis (2006) examined women in Disney animated films by creating three categories: The Classic Years, The Middle Era, and The Eisner Era. The Classic Years, 19371967 were described as the beginning of Disney. During this period, women were rarely featured alone in films, but held central roles in the mid-1930s (Davis 2006:84). Three princess films were released and the characters carried out traditional feminine roles such as domestic work and passivity. Davis (2006) argued the princesses during The Classic Era were the least active and dynamic. The Middle Era, 1967-1988, led to a downward spiral for the company after the deaths of Walt and Roy Disney. The company faced increased amounts of debt and only eight Disney films were produced. The representation of women remained largely static (Davis 2006:137). 
The Eisner Era, 1989-2005, represented a revitalization of Disney with the release of 12 films with leading female roles. Based on the eras, Davis argued there was a shift after Walt Disney's death which allowed more women in leading roles and released them from traditional gender roles. Independence was a new theme in this era, allowing women to be self-sufficient, unlike women in The Classic Era who relied on male heroes. Gender Role Portrayal in films England, Descartes, and Meek (2011) examined the Disney princess films and challenged the ideal of traditional gender roles among the prince and princess characters. The study consisted of all nine princess films divided into three categories based on their debut: early, middle and most current. The researchers tested three hypotheses: 1) gender roles among male and female characters would differ, 2) males would rescue or attempt to rescue the princess, and 3) characters would display more egalitarian behaviors over time (England et al. 2011:557-58). The researchers coded traits as masculine and feminine. They concluded that princesses displayed a mixture of masculine and feminine characteristics. These behaviors implied women are androgynous beings. For example, princesses portrayed bravery almost twice as much as princes (England et al. 2011). The findings also showed males rescued women more and that women were rarely shown as rescuers. Overall, the data indicated Disney princess films had changed over time as women exhibited more masculine behaviors in more recent films. Choueiti, Granados, Pieper, and Smith (2010) conducted a content analysis regarding gender roles in top-grossing G-rated films. The researchers considered the following questions: 1) What is the male to female ratio? 2) Is gender related to the presentation of character demographics such as role, type, or age? and 3) Is gender related to the presentation of a character's likeability, and to the equal distribution of males and females from 1990-2005 (Choueiti et al. 2010:776-77)? The researchers concluded that there were more male characters, suggesting the films were patriarchal. However, there was no correlation between the characters' demographics and gender, nor were males viewed as more likeable. Lastly, female representation slightly decreased from 214 characters, or 30.1%, in 1990-94 to 281 characters, or 29.4%, in 2000-2004 (Choueiti et al. 2010:783). From examining gender role portrayals, females have become androgynous while maintaining minimal roles in animated film.",
"title": ""
},
{
"docid": "2fbc75f848a0a3ae8228b5c6cbe76ec4",
"text": "The authors summarize 35 years of empirical research on goal-setting theory. They describe the core findings of the theory, the mechanisms by which goals operate, moderators of goal effects, the relation of goals and satisfaction, and the role of goals as mediators of incentives. The external validity and practical significance of goal-setting theory are explained, and new directions in goal-setting research are discussed. The relationships of goal setting to other theories are described as are the theory's limitations.",
"title": ""
},
{
"docid": "9780c2d63739b8bf4f5c48f12014f605",
"text": "It has been hypothesized that unexplained infertility may be related to specific personality and coping styles. We studied two groups of women with explained infertility (EIF, n = 63) and unexplained infertility (UIF, n = 42) undergoing an in vitro fertilization (IVF) cycle. Women completed personality and coping style questionnaires prior to the onset of the cycle, and state depression and anxiety scales before and at two additional time points during the cycle. Almost no in-between group differences were found at any of the measured time points in regards to the Minnesota Multiphasic Personality Inventory-2 validity and clinical scales, Illness Cognitions and Life Orientation Test, or for the situational measures. The few differences found suggest a more adaptive, better coping, and functioning defensive system in women with EIF. In conclusion, we did not find any clinically significant personality differences or differences in depression or anxiety levels between women with EIF and UIF during an IVF cycle. Minor differences found are probably a reaction to the ambiguous medical situation with its uncertain prognosis, amplifying certain traits which are not specific to one psychological structure but rather to the common experience shared by the group. The results of this study do not support the possibility that personality traits are involved in the pathophysiology of unexplained infertility.",
"title": ""
},
{
"docid": "c25d877f23f874a5ced7548998ec8157",
"text": "The paper presents a Neural Network model for modeling academic profile of students. The proposed model allows prediction of students’ academic performance based on some of their qualitative observations. Classifying and predicting students’ academic performance using arithmetical and statistical techniques may not necessarily offer the best way to evaluate human acquisition of knowledge and skills, but a hybridized fuzzy neural network model successfully handles reasoning with imprecise information, and enables representation of student modeling in the linguistic form the same way the human teachers do. The model is designed, developed and tested in MATLAB and JAVA which considers factors like age, gender, education, past performance, work status, study environment etc. for performance prediction of students. A Fuzzy Probabilistic Neural Network model has been proposed which enables the design of an easy-to-use, personalized student performance prediction component. The results of experiments show that the model outperforms traditional back-propagation neural networks as well as statistical models. It is also found to be a useful tool in predicting the performance of students belonging to any stream. The model may provide dual advantage to the educational institutions; first by helping teachers to amend their teaching methodology based on the level of students thereby improving students’ performances and secondly classifying the likely successful and unsuccessful students.",
"title": ""
},
{
"docid": "02750b69e72daf7f82cb57e1f7f228bf",
"text": "An advanced, simple to use, detrending method to be used before heart rate variability analysis (HRV) is presented. The method is based on smoothness priors approach and operates like a time-varying finite-impulse response high-pass filter. The effect of the detrending on time- and frequency-domain analysis of HRV is studied.",
"title": ""
},
{
"docid": "93a283324fed31e4ecf81d62acae583a",
"text": "The success of the state-of-the-art deblurring methods mainly depends on the restoration of sharp edges in a coarse-to-fine kernel estimation process. In this paper, we propose to learn a deep convolutional neural network for extracting sharp edges from blurred images. Motivated by the success of the existing filtering-based deblurring methods, the proposed model consists of two stages: suppressing extraneous details and enhancing sharp edges. We show that the two-stage model simplifies the learning process and effectively restores sharp edges. Facilitated by the learned sharp edges, the proposed deblurring algorithm does not require any coarse-to-fine strategy or edge selection, thereby significantly simplifying kernel estimation and reducing computation load. Extensive experimental results on challenging blurry images demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods on both synthetic and real-world images in terms of visual quality and run-time.",
"title": ""
},
{
"docid": "c688d24fd8362a16a19f830260386775",
"text": "We present a fast iterative algorithm for identifying the Support Vectors of a given set of points. Our algorithm works by maintaining a candidate Support Vector set. It uses a greedy approach to pick points for inclusion in the candidate set. When the addition of a point to the candidate set is blocked because of other points already present in the set we use a backtracking approach to prune away such points. To speed up convergence we initialize our algorithm with the nearest pair of points from opposite classes. We then use an optimization based approach to increment or prune the candidate Support Vector set. The algorithm makes repeated passes over the data to satisfy the KKT constraints. The memory requirements of our algorithm scale as O(|S|) in the average case, where|S| is the size of the Support Vector set. We show that the algorithm is extremely competitive as compared to other conventional iterative algorithms like SMO and the NPA. We present results on a variety of real life datasets to validate our claims.",
"title": ""
},
{
"docid": "65fe1d49a386f62d467b2796a270510c",
"text": "The connection between human resources and performance in firms in the private sector is well documented. What is less clear is whether the move towards managerialism that has taken place within the Australian public sector during the last twenty years has brought with it some of the features of the relationships between Human Resource Management (HRM) and performance experienced within the private sector. The research begins with a review of the literature. In particular the conceptual thinking surrounding the connection between HRM and performance within private sector organisations is explored. Issues of concern are the direction of the relationship between HRM and performance and definitional questions as to the nature and level of HRM to be investigated and the measurement of performance. These conceptual issues are also debated within the context of a public sector and particularly the Australian environment. An outcome of this task is the specification of a set of appropriate parameters for a study of these linkages within Australian public sector organizations. Short Description The paper discusses the significance of strategic human resource management in relation to performance.",
"title": ""
},
{
"docid": "b77c65cf9fe637fc88752f6776a21e36",
"text": "This paper studies computer security from first principles. The basic questions \"Why?\", \"How do we know what we know?\" and \"What are the implications of what we believe?\"",
"title": ""
},
{
"docid": "8305594d16f0565e3a62cbb69821c485",
"text": "MOTIVATION\nAccurately predicting protein secondary structure and relative solvent accessibility is important for the study of protein evolution, structure and function and as a component of protein 3D structure prediction pipelines. Most predictors use a combination of machine learning and profiles, and thus must be retrained and assessed periodically as the number of available protein sequences and structures continues to grow.\n\n\nRESULTS\nWe present newly trained modular versions of the SSpro and ACCpro predictors of secondary structure and relative solvent accessibility together with their multi-class variants SSpro8 and ACCpro20. We introduce a sharp distinction between the use of sequence similarity alone, typically in the form of sequence profiles at the input level, and the additional use of sequence-based structural similarity, which uses similarity to sequences in the Protein Data Bank to infer annotations at the output level, and study their relative contributions to modern predictors. Using sequence similarity alone, SSpro's accuracy is between 79 and 80% (79% for ACCpro) and no other predictor seems to exceed 82%. However, when sequence-based structural similarity is added, the accuracy of SSpro rises to 92.9% (90% for ACCpro). Thus, by combining both approaches, these problems appear now to be essentially solved, as an accuracy of 100% cannot be expected for several well-known reasons. These results point also to several open technical challenges, including (i) achieving on the order of ≥ 80% accuracy, without using any similarity with known proteins and (ii) achieving on the order of ≥ 85% accuracy, using sequence similarity alone.\n\n\nAVAILABILITY AND IMPLEMENTATION\nSSpro, SSpro8, ACCpro and ACCpro20 programs, data and web servers are available through the SCRATCH suite of protein structure predictors at http://scratch.proteomics.ics.uci.edu.",
"title": ""
},
{
"docid": "3eb50289c3b28d2ce88052199d40bf8d",
"text": "Transportation Problem is an important aspect which has been widely studied in Operations Research domain. It has been studied to simulate different real life problems. In particular, application of this Problem in NPHard Problems has a remarkable significance. In this Paper, we present a comparative study of Transportation Problem through Probabilistic and Fuzzy Uncertainties. Fuzzy Logic is a computational paradigm that generalizes classical two-valued logic for reasoning under uncertainty. In order to achieve this, the notation of membership in a set needs to become a matter of degree. By doing this we accomplish two things viz., (i) ease of describing human knowledge involving vague concepts and (ii) enhanced ability to develop cost-effective solution to real-world problem. The multi-valued nature of Fuzzy Sets allows handling uncertain and vague information. It is a model-less approach and a clever disguise of Probability Theory. We give comparative simulation results of both approaches and discuss the Computational Complexity. To the best of our knowledge, this is the first work on comparative study of Transportation Problem using Probabilistic and Fuzzy Uncertainties.",
"title": ""
}
] |
scidocsrr
|
46ebfa26fb7981c876cf3c7a2cfae58d
|
Understanding Information
|
[
{
"docid": "aa32bff910ce6c7b438dc709b28eefe3",
"text": "Here we sketch the rudiments of what constitutes a smart city which we define as a city in which ICT is merged with traditional infrastructures, coordinated and integrated using new digital technologies. We first sketch our vision defining seven goals which concern: developing a new understanding of urban problems; effective and feasible ways to coordinate urban technologies; models and methods for using urban data across spatial and temporal scales; developing new technologies for communication and dissemination; developing new forms of urban governance and organisation; defining critical problems relating to cities, transport, and energy; and identifying risk, uncertainty, and hazards in the smart city. To this, we add six research challenges: to relate the infrastructure of smart cities to their operational functioning and planning through management, control and optimisation; to explore the notion of the city as a laboratory for innovation; to provide portfolios of urban simulation which inform future designs; to develop technologies that ensure equity, fairness and realise a better quality of city life; to develop technologies that ensure informed participation and create shared knowledge for democratic city governance; and to ensure greater and more effective mobility and access to opportunities for a e-mail: [email protected] 482 The European Physical Journal Special Topics urban populations. We begin by defining the state of the art, explaining the science of smart cities. We define six scenarios based on new cities badging themselves as smart, older cities regenerating themselves as smart, the development of science parks, tech cities, and technopoles focused on high technologies, the development of urban services using contemporary ICT, the use of ICT to develop new urban intelligence functions, and the development of online and mobile forms of participation. Seven project areas are then proposed: Integrated Databases for the Smart City, Sensing, Networking and the Impact of New Social Media, Modelling Network Performance, Mobility and Travel Behaviour, Modelling Urban Land Use, Transport and Economic Interactions, Modelling Urban Transactional Activities in Labour and Housing Markets, Decision Support as Urban Intelligence, Participatory Governance and Planning Structures for the Smart City. Finally we anticipate the paradigm shifts that will occur in this research and define a series of key demonstrators which we believe are important to progressing a science",
"title": ""
}
] |
[
{
"docid": "e59136e0d0a710643a078b58075bd8cd",
"text": "PURPOSE\nEpidemiological evidence suggests that chronic consumption of fruit-based flavonoids is associated with cognitive benefits; however, the acute effects of flavonoid-rich (FR) drinks on cognitive function in the immediate postprandial period require examination. The objective was to investigate whether consumption of FR orange juice is associated with acute cognitive benefits over 6 h in healthy middle-aged adults.\n\n\nMETHODS\nMales aged 30-65 consumed a 240-ml FR orange juice (272 mg) and a calorie-matched placebo in a randomized, double-blind, counterbalanced order on 2 days separated by a 2-week washout. Cognitive function and subjective mood were assessed at baseline (prior to drink consumption) and 2 and 6 h post consumption. The cognitive battery included eight individual cognitive tests. A standardized breakfast was consumed prior to the baseline measures, and a standardized lunch was consumed 3 h post-drink consumption.\n\n\nRESULTS\nChange from baseline analysis revealed that performance on tests of executive function and psychomotor speed was significantly better following the FR drink compared to the placebo. The effects of objective cognitive function were supported by significant benefits for subjective alertness following the FR drink relative to the placebo.\n\n\nCONCLUSIONS\nThese data demonstrate that consumption of FR orange juice can acutely enhance objective and subjective cognition over the course of 6 h in healthy middle-aged adults.",
"title": ""
},
{
"docid": "2690f802022b273d41b3131aa982b91b",
"text": "Deep neural networks are demonstrating excellent performance on several classical vision problems. However, these networks are vulnerable to adversarial examples, minutely modified images that induce arbitrary attacker-chosen output from the network. We propose a mechanism to protect against these adversarial inputs based on a generative model of the data. We introduce a pre-processing step that projects on the range of a generative model using gradient descent before feeding an input into a classifier. We show that this step provides the classifier with robustness against first-order, substitute model, and combined adversarial attacks. Using a min-max formulation, we show that there may exist adversarial examples even in the range of the generator, natural-looking images extremely close to the decision boundary for which the classifier has unjustifiedly high confidence. We show that adversarial training on the generative manifold can be used to make a classifier that is robust to these attacks. Finally, we show how our method can be applied even without a pre-trained generative model using a recent method called the deep image prior. We evaluate our method on MNIST, CelebA and Imagenet and show robustness against the current state of the art attacks.",
"title": ""
},
{
"docid": "1c5e17c7acff27e3b10aecf15c5809e7",
"text": "Recent years witness a growing interest in nonstandard epistemic logics of “knowing whether”, “knowing what”, “knowing how” and so on. These logics are usually not normal, i.e., the standard axioms and reasoning rules for modal logic may be invalid. In this paper, we show that the conditional “knowing value” logic proposed by Wang and Fan [10] can be viewed as a disguised normal modal logic by treating the negation of Kv operator as a special diamond. Under this perspective, it turns out that the original first-order Kripke semantics can be greatly simplified by introducing a ternary relation R i in standard Kripke models which associates one world with two i-accessible worlds that do not agree on the value of constant c. Under intuitive constraints, the modal logic based on such Kripke models is exactly the one studied by Wang and Fan [10,11]. Moreover, there is a very natural binary generalization of the “knowing value” diamond, which, surprisingly, does not increase the expressive power of the logic. The resulting logic with the binary diamond has a transparent normal modal system which sharpens our understanding of the “knowing value” logic and simplifies some previous hard problems.",
"title": ""
},
{
"docid": "0ee27f9045935db4241e9427bed2af59",
"text": "As a new generation of deep-sea Autonomous Underwater Vehicle (AUV), Qianlong I is a 6000m rated glass deep-sea manganese nodules detection AUV which based on the CR01 and the CR02 deep-sea AUVs and developed by Shenyang Institute of Automation, the Chinese Academy of Sciences from 2010. The Qianlong I was tested in the thousand-isles lake in Zhejiang Province of China during November 2012 to March 2013 and the sea trials were conducted in the South China Sea during April 20-May 2, 2013 after the lake tests and the ocean application completed in October 2013. This paper describes two key problems encountered in the process of developing Qianlong I, including the launch and recovery systems development and variable buoyancy system development. Results from the recent lake and sea trails are presented, and future missions and development plans are discussed.",
"title": ""
},
{
"docid": "98d1c35aeca5de703cec468b2625dc72",
"text": "Congenital adrenal hyperplasia was described in London by Phillips (1887) who reported four cases of spurious hermaphroditism in one family. Fibiger (1905) noticed that there was enlargement of the adrenal glands in some infants who had died after prolonged vomiting and dehydration. Butler, Ross, and Talbot (1939) reported a case which showed serum electrolyte changes similar to those of Addison's disease. Further developments had to await the synthesis of cortisone. The work ofWilkins, Lewis, Klein, and Rosemberg (1950) showed that cortisone could alleviate the disorder and suppress androgen secretion. Bartter, Albright, Forbes, Leaf, Dempsey, and Carroll (1951) suggested that, in congenital adrenal hyperplasia, there might be a primary impairment of synthesis of cortisol (hydrocortisone, compound F) and a secondary rise of pituitary adrenocorticotrophin (ACTH) production. This was confirmed by Jailer, Louchart, and Cahill (1952) who showed that ACTH caused little increase in the output of cortisol in such cases. In the same year, Snydor, Kelley, Raile, Ely, and Sayers (1953) found an increased level ofACTH in the blood of affected patients. Studies of enzyme systems were carried out. Jailer, Gold, Vande Wiele, and Lieberman (1955) and Frantz, Holub, and Jailer (1960) produced evidence that the most common site for the biosynthetic block was in the C-21 hydroxylating system. Eberlein and Bongiovanni (1955) showed that there was a C-l 1 hydroxylation defect in patients with the hypertensive form of congenital adrenal hyperplasia, and Bongiovanni (1961) and Bongiovanni and Kellenbenz (1962), showed that in some patients there was a further type of enzyme defect, a 3-(-hydroxysteroid dehydrogenase deficiency, an enzyme which is required early in the metabolic pathway. Prader and Siebenmann (1957) described a female infant who had adrenal insufficiency and congenital lipoid hyperplasia of the",
"title": ""
},
{
"docid": "2466ac1ce3d54436f74b5bb024f89662",
"text": "In this paper we discuss our work on applying media theory to the creation of narrative augmented reality (AR) experiences. We summarize the concepts of remediation and media forms as they relate to our work, argue for their importance to the development of a new medium such as AR, and present two example AR experiences we have designed using these conceptual tools. In particular, we focus on leveraging the interaction between the physical and virtual world, remediating existing media (film, stage and interactive CD-ROM), and building on the cultural expectations of our users.",
"title": ""
},
{
"docid": "bf03f941bcf921a44d0a34ec2161ee34",
"text": "Epidermolytic ichthyosis (EI) is a rare autosomal dominant genodermatosis that presents at birth as a bullous disease, followed by a lifelong ichthyotic skin disorder. Essentially, it is a defective keratinization caused by mutations of keratin 1 (KRT1) or keratin 10 (KRT10) genes, which lead to skin fragility, blistering, and eventually hyperkeratosis. Successful management of EI in the newborn period can be achieved through a thoughtful, directed, and interdisciplinary or multidisciplinary approach that encompasses family support. This condition requires meticulous care to avoid associated morbidities such as infection and dehydration. A better understanding of the disrupted barrier protection of the skin in these patients provides a basis for management with daily bathing, liberal emollients, pain control, and proper nutrition as the mainstays of treatment. In addition, this case presentation will include discussions on the pathophysiology, complications, differential diagnosis, and psychosocial and ethical issues.",
"title": ""
},
{
"docid": "b8b96789191e5afa48bea1d9e92443d5",
"text": "Methionine, cysteine, homocysteine, and taurine are the 4 common sulfur-containing amino acids, but only the first 2 are incorporated into proteins. Sulfur belongs to the same group in the periodic table as oxygen but is much less electronegative. This difference accounts for some of the distinctive properties of the sulfur-containing amino acids. Methionine is the initiating amino acid in the synthesis of virtually all eukaryotic proteins; N-formylmethionine serves the same function in prokaryotes. Within proteins, many of the methionine residues are buried in the hydrophobic core, but some, which are exposed, are susceptible to oxidative damage. Cysteine, by virtue of its ability to form disulfide bonds, plays a crucial role in protein structure and in protein-folding pathways. Methionine metabolism begins with its activation to S-adenosylmethionine. This is a cofactor of extraordinary versatility, playing roles in methyl group transfer, 5'-deoxyadenosyl group transfer, polyamine synthesis, ethylene synthesis in plants, and many others. In animals, the great bulk of S-adenosylmethionine is used in methylation reactions. S-Adenosylhomocysteine, which is a product of these methyltransferases, gives rise to homocysteine. Homocysteine may be remethylated to methionine or converted to cysteine by the transsulfuration pathway. Methionine may also be metabolized by a transamination pathway. This pathway, which is significant only at high methionine concentrations, produces a number of toxic endproducts. Cysteine may be converted to such important products as glutathione and taurine. Taurine is present in many tissues at higher concentrations than any of the other amino acids. It is an essential nutrient for cats.",
"title": ""
},
{
"docid": "503ddcf57b4e7c1ddc4f4646fb6ca3db",
"text": "Merging the virtual World Wide Web with nearby physical devices that are part of the Internet of Things gives anyone with a mobile device and the appropriate authorization the power to monitor or control anything.",
"title": ""
},
{
"docid": "372182b4ac2681ceedb9d78e9f38343d",
"text": "A 12-bit 10-GS/s interleaved (IL) pipeline analog-to-digital converter (ADC) is described in this paper. The ADC achieves a signal to noise and distortion ratio (SNDR) of 55 dB and a spurious free dynamic range (SFDR) of 66 dB with a 4-GHz input signal, is fabricated in the 28-nm CMOS technology, and dissipates 2.9 W. Eight pipeline sub-ADCs are interleaved to achieve 10-GS/s sample rate, and mismatches between sub-ADCs are calibrated in the background. The pipeline sub-ADCs employ a variety of techniques to lower power, like avoiding a dedicated sample-and-hold amplifier (SHA-less), residue scaling, flash background calibration, dithering and inter-stage gain error background calibration. A push–pull input buffer optimized for high-frequency linearity drives the interleaved sub-ADCs to enable >7-GHz bandwidth. A fast turn-ON bootstrapped switch enables 100-ps sampling. The ADC also includes the ability to randomize the sub-ADC selection pattern to further reduce residual interleaving spurs.",
"title": ""
},
{
"docid": "eb956188486caa595b7f38d262781af7",
"text": "Due to the competitiveness of the computing industry, software developers are pressured to quickly deliver new code releases. At the same time, operators are expected to update and keep production systems stable at all times. To overcome the development–operations barrier, organizations have started to adopt Infrastructure as Code (IaC) tools to efficiently deploy middleware and applications using automation scripts. These automations comprise a series of steps that should be idempotent to guarantee repeatability and convergence. Rigorous testing is required to ensure that the system idempotently converges to a desired state, starting from arbitrary states. We propose and evaluate a model-based testing framework for IaC. An abstracted system model is utilized to derive state transition graphs, based on which we systematically generate test cases for the automation. The test cases are executed in light-weight virtual machine environments. Our prototype targets one popular IaC tool (Chef), but the approach is general. We apply our framework to a large base of public IaC scripts written by operators, showing that it correctly detects non-idempotent automations.",
"title": ""
},
{
"docid": "b3790611437e1660b7c222adcb26b510",
"text": "There have been increasing interests in the robotics community in building smaller and more agile autonomous micro aerial vehicles (MAVs). In particular, the monocular visual-inertial system (VINS) that consists of only a camera and an inertial measurement unit (IMU) forms a great minimum sensor suite due to its superior size, weight, and power (SWaP) characteristics. In this paper, we present a tightly-coupled nonlinear optimization-based monocular VINS estimator for autonomous rotorcraft MAVs. Our estimator allows the MAV to execute trajectories at 2 m/s with roll and pitch angles up to 30 degrees. We present extensive statistical analysis to verify the performance of our approach in different environments with varying flight speeds.",
"title": ""
},
{
"docid": "7f61235bb8b77376936256dcf251ee0b",
"text": "These practical guidelines for the biological treatment of personality disorders in primary care settings were developed by an international Task Force of the World Federation of Societies of Biological Psychiatry (WFSBP). They embody the results of a systematic review of all available clinical and scientific evidence pertaining to the biological treatment of three specific personality disorders, namely borderline, schizotypal and anxious/avoidant personality disorder in addition to some general recommendations for the whole field. The guidelines cover disease definition, classification, epidemiology, course and current knowledge on biological underpinnings, and provide a detailed overview on the state of the art of clinical management. They deal primarily with biological treatment (including antidepressants, neuroleptics, mood stabilizers and some further pharmacological agents) and discuss the relative significance of medication within the spectrum of treatment strategies that have been tested for patients with personality disorders, up to now. The recommendations should help the clinician to evaluate the efficacy spectrum of psychotropic drugs and therefore to select the drug best suited to the specific psychopathology of an individual patient diagnosed for a personality disorder.",
"title": ""
},
{
"docid": "0122057f9fd813efd9f9e0db308fe8d9",
"text": "Noun phrases in queries are identified and classified into four types: proper names, dictionary phrases, simple phrases and complex phrases. A document has a phrase if all content words in the phrase are within a window of a certain size. The window sizes for different types of phrases are different and are determined using a decision tree. Phrases are more important than individual terms. Consequently, documents in response to a query are ranked with matching phrases given a higher priority. We utilize WordNet to disambiguate word senses of query terms. Whenever the sense of a query term is determined, its synonyms, hyponyms, words from its definition and its compound words are considered for possible additions to the query. Experimental results show that our approach yields between 23% and 31% improvements over the best-known results on the TREC 9, 10 and 12 collections for short (title only) queries, without using Web data.",
"title": ""
},
{
"docid": "5416e2a3f5a1855f19814eecec85092a",
"text": "Code clones are exactly or nearly similar code fragments in the code-base of a software system. Existing studies show that clones are directly related to bugs and inconsistencies in the code-base. Code cloning (making code clones) is suspected to be responsible for replicating bugs in the code fragments. However, there is no study on the possibilities of bug-replication through cloning process. Such a study can help us discover ways of minimizing bug-replication. Focusing on this we conduct an empirical study on the intensities of bug-replication in the code clones of the major clone-types: Type 1, Type 2, and Type 3. According to our investigation on thousands of revisions of six diverse subject systems written in two different programming languages, C and Java, a considerable proportion (i.e., up to 10%) of the code clones can contain replicated bugs. Both Type 2 and Type 3 clones have higher tendencies of having replicated bugs compared to Type 1 clones. Thus, Type 2 and Type 3 clones are more important from clone management perspectives. The extent of bug-replication in the buggy clone classes is generally very high (i.e., 100% in most of the cases). We also find that overall 55% of all the bugs experienced by the code clones can be replicated bugs. Our study shows that replication of bugs through cloning is a common phenomenon. Clone fragments having method-calls and if-conditions should be considered for refactoring with high priorities, because such clone fragments have high possibilities of containing replicated bugs. We believe that our findings are important for better maintenance of software systems, in particular, systems with code clones.",
"title": ""
},
{
"docid": "ea95f4475bb65f7ea0f270387919df47",
"text": "The field of supramolecular chemistry focuses on the non-covalent interactions between molecules that give rise to molecular recognition and self-assembly processes. Since most non-covalent interactions are relatively weak and form and break without significant activation barriers, many supramolecular systems are under thermodynamic control. Hence, traditionally, supramolecular chemistry has focused predominantly on systems at equilibrium. However, more recently, self-assembly processes that are governed by kinetics, where the outcome of the assembly process is dictated by the assembly pathway rather than the free energy of the final assembled state, are becoming topical. Within the kinetic regime it is possible to distinguish between systems that reside in a kinetic trap and systems that are far from equilibrium and require a continuous supply of energy to maintain a stationary state. In particular, the latter systems have vast functional potential, as they allow, in principle, for more elaborate structural and functional diversity of self-assembled systems - indeed, life is a prime example of a far-from-equilibrium system. In this Review, we compare the different thermodynamic regimes using some selected examples and discuss some of the challenges that need to be addressed when developing new functional supramolecular systems.",
"title": ""
},
{
"docid": "4d87a5793186fc1dcaa51abcc06135a7",
"text": "PURPOSE OF REVIEW\nArboviruses have been associated with central and peripheral nervous system injuries, in special the flaviviruses. Guillain-Barré syndrome (GBS), transverse myelitis, meningoencephalitis, ophthalmological manifestations, and other neurological complications have been recently associated to Zika virus (ZIKV) infection. In this review, we aim to analyze the epidemiological aspects, possible pathophysiology, and what we have learned about the clinical and laboratory findings, as well as treatment of patients with ZIKV-associated neurological complications.\n\n\nRECENT FINDINGS\nIn the last decades, case series have suggested a possible link between flaviviruses and development of GBS. Recently, large outbreaks of ZIKV infection in Asia and the Americas have led to an increased incidence of GBS in these territories. Rapidly, several case reports and case series have reported an increase of all clinical forms and electrophysiological patterns of GBS, also including cases with associated central nervous system involvement. Finally, cases suggestive of acute transient polyneuritis, as well as acute and progressive postinfectious neuropathies associated to ZIKV infection have been reported, questioning the usually implicated mechanisms of neuronal injury.\n\n\nSUMMARY\nThe recent ZIKV outbreaks have triggered the occurrence of a myriad of neurological manifestations likely associated to this arbovirosis, in special GBS and its variants.",
"title": ""
},
{
"docid": "f312bfe7f80fdf406af29bfde635fa36",
"text": "In two studies, a newly devised test (framed-line test) was used to examine the hypothesis that individuals engaging in Asian cultures are more capable of incorporating contextual information and those engaging in North American cultures are more capable of ignoring contextual information. On each trial, participants were presented with a square frame, within which was printed a vertical line. Participants were then shown another square frame of the same or different size and asked to draw a line that was identical to the first line in either absolute length (absolute task) or proportion to the height of the surrounding frame (relative task). The results supported the hypothesis: Whereas Japanese were more accurate in the relative task, Americans were more accurate in the absolute task. Moreover, when engaging in another culture, individuals tended to show the cognitive characteristic common in the host culture.",
"title": ""
},
{
"docid": "b213afb537bbc4c476c760bb8e8f2997",
"text": "Recommender system has been demonstrated as one of the most useful tools to assist users' decision makings. Several recommendation algorithms have been developed and implemented by both commercial and open-source recommendation libraries. Context-aware recommender system (CARS) emerged as a novel research direction during the past decade and many contextual recommendation algorithms have been proposed. Unfortunately, no recommendation engines start to embed those algorithms in their kits, due to the special characteristics of the data format and processing methods in the domain of CARS. This paper introduces an open-source Java-based context-aware recommendation engine named as CARSKit which is recognized as the 1st open source recommendation library specifically designed for CARS. It implements the state-of-the-art context-aware recommendation algorithms, and we will showcase the ease with which CARSKit allows recommenders to be configured and evaluated in this demo.",
"title": ""
},
{
"docid": "101c03b85e3cc8518a158d89cc9b3b39",
"text": "Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet requiring tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data. We propose a model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space. By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data. We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores of 32.8 and 15.1 on the Multi30k and WMT English-French datasets, without using even a single parallel sentence at training time.",
"title": ""
}
] |
scidocsrr
|
d3516e87c5db3ab802e14d7b9a273fe6
|
Bayesian Policy Gradients via Alpha Divergence Dropout Inference
|
[
{
"docid": "ba1368e4acc52395a8e9c5d479d4fe8f",
"text": "This talk will present an overview of our recent research on distributional reinforcement learning. Our starting point is our recent ICML paper, in which we argued for the fundamental importance of the value distribution: the distribution of random returns received by a reinforcement learning agent. This is in contrast to the common approach, which models the expectation of this return, or value. Back then, we were able to design a new algorithm that learns the value distribution through a TD-like bootstrap process and achieved state-of-the-art performance on games from the Arcade Learning Environment (ALE). However, this left open the question as to why the distributional approach should perform better at all. We’ve since delved deeper into what makes distributional RL work: first by improving the original using quantile regression, which directly minimizes the Wasserstein metric; and second by unearthing surprising connections between the original C51 algorithm and the distant cousin of the Wasserstein metric, the Cramer distance.",
"title": ""
},
{
"docid": "4fc6ac1b376c965d824b9f8eb52c4b50",
"text": "Efficient exploration remains a major challenge for reinforcement learning (RL). Common dithering strategies for exploration, such as -greedy, do not carry out temporally-extended (or deep) exploration; this can lead to exponentially larger data requirements. However, most algorithms for statistically efficient RL are not computationally tractable in complex environments. Randomized value functions offer a promising approach to efficient exploration with generalization, but existing algorithms are not compatible with nonlinearly parameterized value functions. As a first step towards addressing such contexts we develop bootstrapped DQN. We demonstrate that bootstrapped DQN can combine deep exploration with deep neural networks for exponentially faster learning than any dithering strategy. In the Arcade Learning Environment bootstrapped DQN substantially improves learning speed and cumulative performance across most games.",
"title": ""
},
{
"docid": "16915e2da37f8cd6fa1ce3a4506223ff",
"text": "In recent years, significant progress has been made in solving challenging problems across various domains using deep reinforcement learning (RL). Reproducing existing work and accurately judging the improvements offered by novel methods is vital to sustaining this progress. Unfortunately, reproducing results for state-of-the-art deep RL methods is seldom straightforward. In particular, non-determinism in standard benchmark environments, combined with variance intrinsic to the methods, can make reported results tough to interpret. Without significance metrics and tighter standardization of experimental reporting, it is difficult to determine whether improvements over the prior state-of-the-art are meaningful. In this paper, we investigate challenges posed by reproducibility, proper experimental techniques, and reporting procedures. We illustrate the variability in reported metrics and results when comparing against common baselines and suggest guidelines to make future results in deep RL more reproducible. We aim to spur discussion about how to ensure continued progress in the field by minimizing wasted effort stemming from results that are non-reproducible and easily misinterpreted.",
"title": ""
}
] |
[
{
"docid": "83a968fcd2d77de796a8161b6dead9bc",
"text": "We introduce a deep learning-based method to generate full 3D hair geometry from an unconstrained image. Our method can recover local strand details and has real-time performance. State-of-the-art hair modeling techniques rely on large hairstyle collections for nearest neighbor retrieval and then perform ad-hoc refinement. Our deep learning approach, in contrast, is highly efficient in storage and can run 1000 times faster while generating hair with 30K strands. The convolutional neural network takes the 2D orientation field of a hair image as input and generates strand features that are evenly distributed on the parameterized 2D scalp. We introduce a collision loss to synthesize more plausible hairstyles, and the visibility of each strand is also used as a weight term to improve the reconstruction accuracy. The encoder-decoder architecture of our network naturally provides a compact and continuous representation for hairstyles, which allows us to interpolate naturally between hairstyles. We use a large set of rendered synthetic hair models to train our network. Our method scales to real images because an intermediate 2D orientation field, automatically calculated from the real image, factors out the difference between synthetic and real hairs. We demonstrate the effectiveness and robustness of our method on a wide range of challenging real Internet pictures, and show reconstructed hair sequences from videos.",
"title": ""
},
{
"docid": "b1138def2e8c5206eecc9cefa5a7c901",
"text": "Soft robots have recently demonstrated impressive abilities to adapt to objects and their environment with limited sensing and actuation. However, mobile soft robots are typically fabricated using laborious molding processes that result in limited actuated degrees of freedom and hence limited locomotion capabilities. In this paper, we present a 3D printed robot with bellowed soft legs capable of rotation about two axes. This allows our robot to navigate rough terrain that previously posed a significant challenge to soft robots. We present models and FEM simulations for the soft leg modules and predict the robot locomotion capabilities. We use finite element analysis to simulate the actuation characteristics of these modules. We then compared the analytical and computational results to experimental results with a tethered prototype. The experimental soft robot is capable of lifting its legs 5.3 cm off the ground and is able to walk at speeds up to 20 mm/s (0.13 bl/s). This work represents a practical approach to the design and fabrication of functional mobile soft robots.",
"title": ""
},
{
"docid": "08b2b3539a1b10f7423484946121ed50",
"text": "BACKGROUND\nCatheter ablation of persistent atrial fibrillation yields an unsatisfactorily high number of failures. The hybrid approach has recently emerged as a technique that overcomes the limitations of both surgical and catheter procedures alone.\n\n\nMETHODS AND RESULTS\nWe investigated the sequential (staged) hybrid method, which consists of a surgical thoracoscopic radiofrequency ablation procedure followed by radiofrequency catheter ablation 6 to 8 weeks later using the CARTO 3 mapping system. Fifty consecutive patients (mean age 62±7 years, 32 males) with long-standing persistent atrial fibrillation (41±34 months) and a dilated left atrium (>45 mm) were included and prospectively followed in an unblinded registry. During the electrophysiological part of the study, all 4 pulmonary veins were found to be isolated in 36 (72%) patients and a complete box-lesion was confirmed in 14 (28%) patients. All gaps were successfully re-ablated. Twelve months after the completed hybrid ablation, 47 patients (94%) were in normal sinus rhythm (4 patients with paroxysmal atrial fibrillation required propafenone and 1 patient underwent a redo catheter procedure). The majority of arrhythmias recurred during the first 3 months. Beyond 12 months, there were no arrhythmia recurrences detected. The surgical part of the procedure was complicated by 7 (13.7%) major complications, while no serious adverse events were recorded during the radiofrequency catheter part of the procedure.\n\n\nCONCLUSIONS\nThe staged hybrid epicardial-endocardial treatment of long-standing persistent atrial fibrillation seems to be extremely effective in maintenance of normal sinus rhythm compared to radiofrequency catheter or surgical ablation alone. Epicardial ablation alone cannot guarantee durable transmural lesions.\n\n\nCLINICAL TRIAL REGISTRATION\nURL: www.ablace.cz Unique identifier: cz-060520121617.",
"title": ""
},
{
"docid": "0c9bfe01ed4e0f35ff30041db17b6487",
"text": "We demonstrate a system for tracking and analyzing moods of bloggers worldwide, as reflected in the largest blogging community, LiveJournal. Our system collects thousands of blog posts every hour, performs various analyses on the posts and presents the results graphically. Exploring the Blogspace From the point of view of information access, the blogspace offers many natural opportunities beyond traditional search facilities, such as trend detection, topic tracking, link tracking, feed generation, etc. But there is more. Many blog authoring environments allow bloggers to tag their entries with highly individual (and personal) features. Users of LiveJournal, currently the largest weblog community, have the option of reporting theirmoodat the time of the post; users can either select a mood from a predefined list of 132 common moods such as “amused” or “angry,” or enter free-text. A large percentage of LiveJournal users chooses to utilize this option, tagging their postings with a mood. This results in a stream of hundreds of weblog posts tagged with mood information per minute, from hundreds of thousands of different users across the globe. Our focus in this demo is on providing access to the blogspace using moods as the “central” dimension. The type of information needs that we are interested in are best illustrated by questions such as: How do moods develop? How are they related? How do global events impact moods? And: Can global mood swings be traced back to global events? We describe MoodViews, a collection of tools for analyzing, tracking and visualizing moods and mood changes in blogs posted by LiveJournal users.",
"title": ""
},
{
"docid": "945b2067076bd47485b39c33fb062ec1",
"text": "Computation of floating-point transcendental functions has a relevant importance in a wide variety of scientific applications, where the area cost, error and latency are important requirements to be attended. This paper describes a flexible FPGA implementation of a parameterizable floating-point library for computing sine, cosine, arctangent and exponential functions using the CORDIC algorithm. The novelty of the proposed architecture is that by sharing the same resources the CORDIC algorithm can be used in two operation modes, allowing it to compute the sine, cosine or arctangent functions. Additionally, in case of the exponential function, the architectures change automatically between the CORDIC or a Taylor approach, which helps to improve the precision characteristics of the circuit, specifically for small input values after the argument reduction. Synthesis of the circuits and an experimental analysis of the errors have demonstrated the correctness and effectiveness of the implemented cores and allow the designer to choose, for general-purpose applications, a suitable bit-width representation and number of iterations of the CORDIC algorithm.",
"title": ""
},
{
"docid": "352b850c526fd562c5d0c43dfea533f5",
"text": "Social network has lately shown an important impact in both scientific and social societies and is considered a highly weighted source of information nowadays. Due to its noticeable significance, several research movements were introduced in this domain including: Location-Based Social Networks (LBSN), Recommendation Systems, Sentiment Analysis Applications, and many others. Location Based Recommendation systems are among the highly required applications for predicting human mobility based on users' social ties as well as their spatial preferences. In this paper we introduce a trust based recommendation algorithm that addresses the problem of recommending locations based on both users' interests as well as social trust among users. In our study we use two real LBSN, Gowalla and Brightkite that include the social relationships among users as well as data about their visited locations. Experiments showing the performance of the proposed trust based recommendation algorithm are also presented.",
"title": ""
},
{
"docid": "5bef975924d427c3ae186d92a93d4f74",
"text": "The Voronoi diagram of a set of sites partitions space into regions, one per site; the region for a site s consists of all points closer to s than to any other site. The dual of the Voronoi diagram, the Delaunay triangulation, is the unique triangulation such that the circumsphere of every simplex contains no sites in its interior. Voronoi diagrams and Delaunay triangulations have been rediscovered or applied in many areas of mathematics and the natural sciences; they are central topics in computational geometry, with hundreds of papers discussing algorithms and extensions. Section 27.1 discusses the definition and basic properties in the usual case of point sites in R with the Euclidean metric, while Section 27.2 gives basic algorithms. Some of the many extensions obtained by varying metric, sites, environment, and constraints are discussed in Section 27.3. Section 27.4 finishes with some interesting and nonobvious structural properties of Voronoi diagrams and Delaunay triangulations.",
"title": ""
},
{
"docid": "19f3720d0077783554b6d9cd71e95c48",
"text": "Radical prostatectomy is performed on approximately 40% of men with organ-confined prostate cancer. Pathologic information obtained from the prostatectomy specimen provides important prognostic information and guides recommendations for adjuvant treatment. The current pathology protocol in most centers involves primarily qualitative assessment. In this paper, we describe and evaluate our system for automatic prostate cancer detection and grading on hematoxylin & eosin-stained tissue images. Our approach is intended to address the dual challenges of large data size and the need for high-level tissue information about the locations and grades of tumors. Our system uses two stages of AdaBoost-based classification. The first provides high-level tissue component labeling of a superpixel image partitioning. The second uses the tissue component labeling to provide a classification of cancer versus noncancer, and low-grade versus high-grade cancer. We evaluated our system using 991 sub-images extracted from digital pathology images of 50 whole-mount tissue sections from 15 prostatectomy patients. We measured accuracies of 90% and 85% for the cancer versus noncancer and high-grade versus low-grade classification tasks, respectively. This system represents a first step toward automated cancer quantification on prostate digital histopathology imaging, which could pave the way for more accurately informed postprostatectomy patient care.",
"title": ""
},
{
"docid": "838ef5791a8c127f11a53406cf5599d0",
"text": "Clustering is central to many data-driven application domains and has been studied extensively in terms of distance functions and grouping algorithms. Relatively little work has focused on learning representations for clustering. In this paper, we propose Deep Embedded Clustering (DEC), a method that simultaneously learns feature representations and cluster assignments using deep neural networks. DEC learns a mapping from the data space to a lower-dimensional feature space in which it iteratively optimizes a clustering objective. Our experimental evaluations on image and text corpora show significant improvement over state-of-the-art methods.",
"title": ""
},
{
"docid": "6f8e565aff657cbc1b65217d72ead3ab",
"text": "This paper explores patterns of adoption and use of information and communications technology (ICT) by small and medium sized enterprises (SMEs) in the southwest London and Thames Valley region of England. The paper presents preliminary results of a survey of around 400 SMEs drawn from four economically significant sectors in the region: food processing, transport and logistics, media and Internet services. The main objectives of the study were to explore ICT adoption and use patterns by SMEs, to identify factors enabling or inhibiting the successful adoption and use of ICT, and to explore the effectiveness of government policy mechanisms at national and regional levels. While our main result indicates a generally favourable attitude to ICT amongst the SMEs surveyed, it also suggests a failure to recognise ICT’s strategic potential. A surprising result was the overwhelming ignorance of regional, national and European Union wide policy initiatives to support SMEs. This strikes at the very heart of regional, national and European policy that have identified SMEs as requiring specific support mechanisms. Our findings from one of the UK’s most productive regions therefore have important implications for policy aimed at ICT adoption and use by SMEs.",
"title": ""
},
{
"docid": "412b616f4fcb9399c8220c542ecac83e",
"text": "Image cropping aims at improving the aesthetic quality of images by adjusting their composition. Most weakly supervised cropping methods (without bounding box supervision) rely on the sliding window mechanism. The sliding window mechanism requires fixed aspect ratios and limits the cropping region with arbitrary size. Moreover, the sliding window method usually produces tens of thousands of windows on the input image which is very time-consuming. Motivated by these challenges, we firstly formulate the aesthetic image cropping as a sequential decision-making process and propose a weakly supervised Aesthetics Aware Reinforcement Learning (A2-RL) framework to address this problem. Particularly, the proposed method develops an aesthetics aware reward function which especially benefits image cropping. Similar to human's decision making, we use a comprehensive state representation including both the current observation and the historical experience. We train the agent using the actor-critic architecture in an end-to-end manner. The agent is evaluated on several popular unseen cropping datasets. Experiment results show that our method achieves the state-of-the-art performance with much fewer candidate windows and much less time compared with previous weakly supervised methods.",
"title": ""
},
{
"docid": "0dfd5345c2dc3fe047dcc635760ffedd",
"text": "This paper presents a fast, joint spatial- and Doppler velocity-based, probabilistic approach for ego-motion estimation for single and multiple radar-equipped robots. The normal distribution transform is used for the fast and accurate position matching of consecutive radar detections. This registration technique is successfully applied to laser-based scan matching. To overcome discontinuities of the original normal distribution approach, an appropriate clustering technique provides a globally smooth mixed-Gaussian representation. It is shown how this matching approach can be significantly improved by taking the Doppler information into account. The Doppler information is used in a density-based approach to extend the position matching to a joint likelihood optimization function. Then, the estimated ego-motion maximizes this function. Large-scale real world experiments in an urban environment using a 77 GHz radar show the robust and accurate ego-motion estimation of the proposed algorithm. In the experiments, comparisons are made to state-of-the-art algorithms, the vehicle odometry, and a high-precision inertial measurement unit.",
"title": ""
},
{
"docid": "bee4b2dfab47848e8429d4b4617ec9e5",
"text": "Benefit from the quick development of deep learning techniques, salient object detection has achieved remarkable progresses recently. However, there still exists following two major challenges that hinder its application in embedded devices, low resolution output and heavy model weight. To this end, this paper presents an accurate yet compact deep network for efficient salient object detection. More specifically, given a coarse saliency prediction in the deepest layer, we first employ residual learning to learn side-output residual features for saliency refinement, which can be achieved with very limited convolutional parameters while keep accuracy. Secondly, we further propose reverse attention to guide such side-output residual learning in a top-down manner. By erasing the current predicted salient regions from side-output features, the network can eventually explore the missing object parts and details which results in high resolution and accuracy. Experiments on six benchmark datasets demonstrate that the proposed approach compares favorably against state-of-the-art methods, and with advantages in terms of simplicity, efficiency (45 FPS) and model size (81 MB).",
"title": ""
},
{
"docid": "46df34ed9fb6abcc0e6250972fca1faa",
"text": "Reliable, scalable and secured framework for predicting Heart diseases by mining big data is designed. Components of Apache Hadoop are used for processing of big data used for prediction. For increasing the performance, scalability, and reliability Hadoop clusters are deployed on Google Cloud Storage. Mapreduce based Classification via clustering method is proposed for efficient classification of instances using reduced attributes. Mapreduce based C 4.5 decision tree algorithm is improved and implemented to classify the instances. Datasets are analyzed on WEKA (Waikato Environment for Knowledge Analysis) and Hadoop. Classification via clustering method performs classification with 98.5% accuracy on WEKA with reduced attributes. On Mapreduce paradigm using this approach execution time is improved. With clustered instances 49 nodes of decision tree are reduced to 32 and execution time of Mapreduce program is reduced from 113 seconds to 84 seconds. Mapreduce based decision trees present classification of instances more accurately as compared to WEKA based decision trees.",
"title": ""
},
{
"docid": "c5b9053b1b22d56dd827009ef529004d",
"text": "An integrated receiver with high sensitivity and low walk error for a military purpose pulsed time-of-flight (TOF) LADAR system is proposed. The proposed receiver adopts a dual-gain capacitive-feedback TIA (C-TIA) instead of widely used resistive-feedback TIA (R-TIA) to increase the sensitivity. In addition, a new walk-error improvement circuit based on a constant-delay detection method is proposed. Implemented in 0.35 μm CMOS technology, the receiver achieves an input-referred noise current of 1.36 pA/√Hz with bandwidth of 140 MHz and minimum detectable signal (MDS) of 10 nW with a 5 ns pulse at SNR=3.3, maximum walk-error of 2.8 ns, and a dynamic range of 1:12,000 over the operating temperature range of -40 °C to +85 °C.",
"title": ""
},
{
"docid": "31d66211511ae35d71c7055a2abf2801",
"text": "BACKGROUND\nPrevious evidence points to a causal link between playing action video games and enhanced cognition and perception. However, benefits of playing other video games are under-investigated. We examined whether playing non-action games also improves cognition. Hence, we compared transfer effects of an action and other non-action types that required different cognitive demands.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nWe instructed 5 groups of non-gamer participants to play one game each on a mobile device (iPhone/iPod Touch) for one hour a day/five days a week over four weeks (20 hours). Games included action, spatial memory, match-3, hidden- object, and an agent-based life simulation. Participants performed four behavioral tasks before and after video game training to assess for transfer effects. Tasks included an attentional blink task, a spatial memory and visual search dual task, a visual filter memory task to assess for multiple object tracking and cognitive control, as well as a complex verbal span task. Action game playing eliminated attentional blink and improved cognitive control and multiple-object tracking. Match-3, spatial memory and hidden object games improved visual search performance while the latter two also improved spatial working memory. Complex verbal span improved after match-3 and action game training.\n\n\nCONCLUSION/SIGNIFICANCE\nCognitive improvements were not limited to action game training alone and different games enhanced different aspects of cognition. We conclude that training specific cognitive abilities frequently in a video game improves performance in tasks that share common underlying demands. Overall, these results suggest that many video game-related cognitive improvements may not be due to training of general broad cognitive systems such as executive attentional control, but instead due to frequent utilization of specific cognitive processes during game play. Thus, many video game training related improvements to cognition may be attributed to near-transfer effects.",
"title": ""
},
{
"docid": "b5bb4e9e131cb4895ee1b22c60f9e0c8",
"text": "This paper proposes an eye state detection system using Haar Cascade Classifier and Circular Hough Transform. Our proposed system first detects the face and then the eyes using Haar Cascade Classifiers, which differentiate between opened and closed eyes. Circular Hough Transform (CHT) is used to detect the circular shape of the eye and make sure that the eye is detected correctly by the classifiers. The accuracy of the eye detection is 98.56% on our database which contains 2856 images for opened eye and 2384 images for closed eye. The system works on several stages and is fully automatic. The eye state detection system was tested by several people, and the accuracy of the proposed system is 96.96%.",
"title": ""
},
{
"docid": "ff59d1ec0c3eb11b3201e5708a585ca4",
"text": "In this paper, we described our system for Knowledge Base Acceleration (KBA) Track at TREC 2013. The KBA Track has two tasks, CCR and SSF. Our approach consists of two major steps: selecting documents and extracting slot values. Selecting documents is to look for and save the documents that mention the entities of interest. The second step involves with generating seed patterns to extract the slot values and computing confidence score.",
"title": ""
}
] |
scidocsrr
|
606869cd81b4aaf23f4f05117f8765c4
|
Lexico-syntactic text simplification and compression with typed dependencies
|
[
{
"docid": "52ebff6e9509b27185f9f12bc65d86f8",
"text": "We address the problem of simplifying Portuguese texts at the sentence level by treating it as a \"translation task\". We use the Statistical Machine Translation (SMT) framework to learn how to translate from complex to simplified sentences. Given a parallel corpus of original and simplified texts, aligned at the sentence level, we train a standard SMT system and evaluate the \"translations\" produced using both standard SMT metrics like BLEU and manual inspection. Results are promising according to both evaluations, showing that while the model is usually overcautious in producing simplifications, the overall quality of the sentences is not degraded and certain types of simplification operations, mainly lexical, are appropriately captured.",
"title": ""
},
{
"docid": "a93969b08efbc81c80129790d93e39de",
"text": "Text simplification aims to rewrite text into simpler versions, and thus make information accessible to a broader audience. Most previous work simplifies sentences using handcrafted rules aimed at splitting long sentences, or substitutes difficult words using a predefined dictionary. This paper presents a datadriven model based on quasi-synchronous grammar, a formalism that can naturally capture structural mismatches and complex rewrite operations. We describe how such a grammar can be induced from Wikipedia and propose an integer linear programming model for selecting the most appropriate simplification from the space of possible rewrites generated by the grammar. We show experimentally that our method creates simplifications that significantly reduce the reading difficulty of the input, while maintaining grammaticality and preserving its meaning.",
"title": ""
},
{
"docid": "3909409a40aef1d1b6fea5b8a920a707",
"text": "Lexical and syntactic simplification aim to make texts more accessible to certain audiences. Syntactic simplification uses either hand-crafted linguistic rules for deep syntactic transformations, or machine learning techniques to model simpler transformations. Lexical simplification performs a lookup for synonyms followed by context and/or frequency-based models. In this paper we investigate modelling both syntactic and lexical simplification through the learning of general tree transduction rules. Experiments with the Simple English Wikipedia corpus show promising results but highlight the need for clever filtering strategies to remove noisy transformations. Resumo. A simplificação em nı́vel lexical e sintático objetiva tornar textos mais acessı́veis a certos públicos-alvo. Simplificação em nı́vel sintático usa regras confeccionadas manualmente para empregar transformações sintáticas, ou técnicas de aprendizado de máquina para modelar transformações mais simples. Simplificação em nı́vel lexical emprega busca por sinônimos para palavras complexas seguida por análise de contexto e/ou busca em modelos de frequência de palavras. Neste trabalho investiga-se a modelagem de ambas estratégias de simplificação em nı́vel sintático e lexical pelo aprendizado de regras através da transdução de árvores. Experimentos com dados da Simple English Wikipedia mostram resultados promissores, porém destacam a necessidade de estratégias inteligentes de filtragem para remover transformações ruidosas.",
"title": ""
}
] |
[
{
"docid": "c5bf370e5369fb30905b5e5f73528b6c",
"text": "Mars rovers have to this point been almost completely reliant on the solar panel/rechargeable battery combination as a source of power. Curiosity, currently en route, relies on radio isotope decay as its source of electrical power. Given the limited amount of space available for solar panels and that the wattage available from radioisotope decay is limited; power is clearly a critical resource for any rover. The goal of this work is to estimate the energy cost of traversing terrains of various properties. Knowledge of energy costs for terrain traversal will allow for more efficient path planning enabling rovers to have longer periods of activity. Our system accepts grid-based terrain elevation data in the form of a Digital Elevation Model (DEM) along with rover and soil parameters, and uses a newly developed model of the most common rover suspension design (rocker-bogie) along with a terramechanics-based wheel-soil interaction model to build a map of the estimated torque required by each wheel to move the rover to each adjacent terrain square. Future work will involve real world testing and verification of our model.",
"title": ""
},
{
"docid": "065ca3deb8cb266f741feb67e404acb5",
"text": "Recent research on deep convolutional neural networks (CNNs) has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple CNN architectures that achieve that accuracy level. With equivalent accuracy, smaller CNN architectures offer at least three advantages: (1) Smaller CNNs require less communication across servers during distributed training. (2) Smaller CNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller CNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small CNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques, we are able to compress SqueezeNet to less than 0.5MB (510× smaller than AlexNet). The SqueezeNet architecture is available for download here: https://github.com/DeepScale/SqueezeNet",
"title": ""
},
{
"docid": "2742db8262616f2b69d92e0066e6930c",
"text": "Most of previous work in knowledge base (KB) completion has focused on the problem of relation extraction. In this work, we focus on the task of inferring missing entity type instances in a KB, a fundamental task for KB competition yet receives little attention. Due to the novelty of this task, we construct a large-scale dataset and design an automatic evaluation methodology. Our knowledge base completion method uses information within the existing KB and external information from Wikipedia. We show that individual methods trained with a global objective that considers unobserved cells from both the entity and the type side gives consistently higher quality predictions compared to baseline methods. We also perform manual evaluation on a small subset of the data to verify the effectiveness of our knowledge base completion methods and the correctness of our proposed automatic evaluation method.",
"title": ""
},
{
"docid": "a0f4b7f3f9f2a5d430a3b8acead2b746",
"text": "Imagining a scene described in natural language with realistic layout and appearance of entities is the ultimate test of spatial, visual, and semantic world knowledge. Towards this goal, we present the Composition, Retrieval and Fusion Network (Craft), a model capable of learning this knowledge from video-caption data and applying it while generating videos from novel captions. Craft explicitly predicts a temporal-layout of mentioned entities (characters and objects), retrieves spatio-temporal entity segments from a video database and fuses them to generate scene videos. Our contributions include sequential training of components of Craft while jointly modeling layout and appearances, and losses that encourage learning compositional representations for retrieval. We evaluate Craft on semantic fidelity to caption, composition consistency, and visual quality. Craft outperforms direct pixel generation approaches and generalizes well to unseen captions and to unseen video databases with no text annotations. We demonstrate Craft on Flintstones, a new richly annotated video-caption dataset with over 25000 videos. For a glimpse of videos generated by Craft, see https://youtu.be/688Vv86n0z8. Fred wearing a red hat is walking in the living room Retrieve Compose Retrieve Compose Retrieve Pebbles is sitting at a table in a room watching the television Retrieve Compose Retrieve Compose Compose Retrieve Retrieve Fuse",
"title": ""
},
{
"docid": "9624ce8061b8476d7fe8d61ef3b565b8",
"text": "The availability of high-resolution remote sensing (HRRS) data has opened up the possibility for new interesting applications, such as per-pixel classification of individual objects in greater detail. This paper shows how a convolutional neural network (CNN) can be applied to multispectral orthoimagery and a digital surface model (DSM) of a small city for a full, fast and accurate per-pixel classification. The predicted low-level pixel classes are then used to improve the high-level segmentation. Various design choices of the CNN architecture are evaluated and analyzed. The investigated land area is fully manually labeled into five categories (vegetation, ground, roads, buildings and water), and the classification accuracy is compared to other per-pixel classification works on other land areas that have a similar choice of categories. The results of the full classification and segmentation on selected segments of the map show that CNNs are a viable tool for solving both the segmentation and object recognition task for remote sensing data.",
"title": ""
},
{
"docid": "9b13225d4a51419578362a38f22b9c9c",
"text": "Ride-sharing services are transforming urban mobility by providing timely and convenient transportation to anybody, anywhere, and anytime. These services present enormous potential for positive societal impacts with respect to pollution, energy consumption, congestion, etc. Current mathematical models, however, do not fully address the potential of ride-sharing. Recently, a large-scale study highlighted some of the benefits of car pooling but was limited to static routes with two riders per vehicle (optimally) or three (with heuristics). We present a more general mathematical model for real-time high-capacity ride-sharing that (i) scales to large numbers of passengers and trips and (ii) dynamically generates optimal routes with respect to online demand and vehicle locations. The algorithm starts from a greedy assignment and improves it through a constrained optimization, quickly returning solutions of good quality and converging to the optimal assignment over time. We quantify experimentally the tradeoff between fleet size, capacity, waiting time, travel delay, and operational costs for low- to medium-capacity vehicles, such as taxis and van shuttles. The algorithm is validated with ∼3 million rides extracted from the New York City taxicab public dataset. Our experimental study considers ride-sharing with rider capacity of up to 10 simultaneous passengers per vehicle. The algorithm applies to fleets of autonomous vehicles and also incorporates rebalancing of idling vehicles to areas of high demand. This framework is general and can be used for many real-time multivehicle, multitask assignment problems.",
"title": ""
},
{
"docid": "cfec098f84e157a2e12f0ff40551c977",
"text": "In this paper, an online news recommender system for the popular social network, Facebook, is described. This system provides daily newsletters for communities on Facebook. The system fetches the news articles and filters them based on the community description to prepare the daily news digest. Explicit survey feedback from the users show that most users found the application useful and easy to use. They also indicated that they could get some community specific articles that they would not have got otherwise.",
"title": ""
},
{
"docid": "bf2065f6c04f566110667a22a9d1b663",
"text": "Casticin, a polymethoxyflavone occurring in natural plants, has been shown to have anticancer activities. In the present study, we aims to investigate the anti-skin cancer activity of casticin on melanoma cells in vitro and the antitumor effect of casticin on human melanoma xenografts in nu/nu mice in vivo. A flow cytometric assay was performed to detect expression of viable cells, cell cycles, reactive oxygen species production, levels of [Formula: see text] and caspase activity. A Western blotting assay and confocal laser microscope examination were performed to detect expression of protein levels. In the in vitro studies, we found that casticin induced morphological cell changes and DNA condensation and damage, decreased the total viable cells, and induced G2/M phase arrest. Casticin promoted reactive oxygen species (ROS) production, decreased the level of [Formula: see text], and promoted caspase-3 activities in A375.S2 cells. The induced G2/M phase arrest indicated by the Western blotting assay showed that casticin promoted the expression of p53, p21 and CHK-1 proteins and inhibited the protein levels of Cdc25c, CDK-1, Cyclin A and B. The casticin-induced apoptosis indicated that casticin promoted pro-apoptotic proteins but inhibited anti-apoptotic proteins. These findings also were confirmed by the fact that casticin promoted the release of AIF and Endo G from mitochondria to cytosol. An electrophoretic mobility shift assay (EMSA) assay showed that casticin inhibited the NF-[Formula: see text]B binding DNA and that these effects were time-dependent. In the in vivo studies, results from immuno-deficient nu/nu mice bearing the A375.S2 tumor xenograft indicated that casticin significantly suppressed tumor growth based on tumor size and weight decreases. Early G2/M arrest and mitochondria-dependent signaling contributed to the apoptotic A375.S2 cell demise induced by casticin. In in vivo experiments, A375.S2 also efficaciously suppressed tumor volume in a xenotransplantation model. Therefore, casticin might be a potential therapeutic agent for the treatment of skin cancer in the future.",
"title": ""
},
{
"docid": "8e5cbfe1056a75b1116c93d780c00847",
"text": "We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.",
"title": ""
},
{
"docid": "68a77338227063ce4880eb0fe98a3a92",
"text": "Mammalian microRNAs (miRNAs) have recently been identified as important regulators of gene expression, and they function by repressing specific target genes at the post-transcriptional level. Now, studies of miRNAs are resolving some unsolved issues in immunology. Recent studies have shown that miRNAs have unique expression profiles in cells of the innate and adaptive immune systems and have pivotal roles in the regulation of both cell development and function. Furthermore, when miRNAs are aberrantly expressed they can contribute to pathological conditions involving the immune system, such as cancer and autoimmunity; they have also been shown to be useful as diagnostic and prognostic indicators of disease type and severity. This Review discusses recent advances in our understanding of both the intended functions of miRNAs in managing immune cell biology and their pathological roles when their expression is dysregulated.",
"title": ""
},
{
"docid": "9027d974a3bb5c48c1d8f3103e6035d6",
"text": "The creation of memories about real-life episodes requires rapid neuronal changes that may appear after a single occurrence of an event. How is such demand met by neurons in the medial temporal lobe (MTL), which plays a fundamental role in episodic memory formation? We recorded the activity of MTL neurons in neurosurgical patients while they learned new associations. Pairs of unrelated pictures, one of a person and another of a place, were used to construct a meaningful association modeling the episodic memory of meeting a person in a particular place. We found that a large proportion of responsive MTL neurons expanded their selectivity to encode these specific associations within a few trials: cells initially responsive to one picture started firing to the associated one but not to others. Our results provide a plausible neural substrate for the inception of associations, which are crucial for the formation of episodic memories.",
"title": ""
},
{
"docid": "5f1684f33bb1821cfa6470c470feceea",
"text": "In this paper, a new approach is proposed for automated software maintenance. The tool is able to perform 26 different refactorings. It also contains a large selection of metrics to measure the impact of the refactorings on the software and six different search based optimization algorithms to improve the software. This tool contains both monoobjective and multi-objective search techniques for software improvement and is fully automated. The paper describes the various capabilities of the tool, the unique aspects of it, and also presents some research results from experimentation. The individual metrics are tested across five different codebases to deduce the most effective metrics for general quality improvement. It is found that the metrics that relate to more specific elements of the code are more useful for driving change in the search. The mono-objective genetic algorithm is also tested against the multi-objective algorithm to see how comparable the results gained are with three separate objectives. When comparing the best solutions of each individual objective the multi-objective approach generates suitable improvements in quality in less time, allowing for rapid maintenance cycles.",
"title": ""
},
{
"docid": "5221a4982626902388540ba95f5a57c3",
"text": "In this chapter, event-based control approaches for microalgae culture in industrial reactors are evaluated. Those control systems are applied to regulate the microalgae culture growth conditions such as pH and dissolved oxygen concentration. The analyzed event-based control systems deal with sensor and actuator deadbands approaches in order to provide the desired properties of the controller. Additionally, a selective event-based scheme is evaluated for simultaneous control of pH and dissolved oxygen. In such configurations, the event-based approach provides the possibility to adapt the control system actions to the dynamic state of the controlled bioprocess. In such a way, the event-based control algorithm allows to establish a tradeoff between control performance and number of process update actions. This fact can be directly related with reduction of CO2 injection times, what is also reflected in CO2 losses. The application of selective event-based scheme allows the improved biomass productivity, since the controlled variables are kept within the limits for an optimal photosynthesis rate. Moreover, such a control scheme allows effective CO2 utilization and aeration system energy minimization. The analyzed control system configurations are evaluated for both tubular and raceway photobioreactors to proove its viability for different reactor configurations as well as control system objectives. Additionally, control performance indexes have been used to show the efficiency of the event-based control approaches. The obtained results demonA. Pawlowski (✉) ⋅ S. Dormido Department of Computer Science and Automatic Control, UNED, Madrid, Spain e-mail: [email protected] S. Dormido e-mail: [email protected] J.L. Guzmán ⋅ M. Berenguel Department of Informatics, University of Almería, ceiA3, CIESOL, Almería, Spain e-mail: [email protected]",
"title": ""
},
{
"docid": "90b1d0a8670e74ff3549226acd94973e",
"text": "Language identification is the task of automatically detecting the language(s) present in a document based on the content of the document. In this work, we address the problem of detecting documents that contain text from more than one language (multilingual documents). We introduce a method that is able to detect that a document is multilingual, identify the languages present, and estimate their relative proportions. We demonstrate the effectiveness of our method over synthetic data, as well as real-world multilingual documents collected from the web.",
"title": ""
},
{
"docid": "a00cc13a716439c75a5b785407b02812",
"text": "A novel current feedback programming principle and circuit architecture are presented, compatible with LED displays utilizing the 2T1C pixel structure. The new pixel programming approach is compatible with all TFT backplane technologies and can compensate for non-uniformities in both threshold voltage and carrier mobility of the OLED pixel drive TFT, due to a feedback loop that modulates the gate of the driving transistor according to the OLED current. The circuit can be internal or external to the integrated display data driver. Based on simulations and data gathered through a fabricated prototype driver, a pixel drive current of 20 nA can be programmed within an addressing time ranging from 10 μs to 50 μs.",
"title": ""
},
{
"docid": "5305e147b2aa9646366bc13deb0327b0",
"text": "This longitudinal case-study aimed at examining whether purposely teaching for the promotion of higher order thinking skills enhances students’ critical thinking (CT), within the framework of science education. Within a pre-, post-, and post–post experimental design, high school students, were divided into three research groups. The experimental group (n=57) consisted of science students who were exposed to teaching strategies designed for enhancing higher order thinking skills. Two other groups: science (n=41) and non-science majors (n=79), were taught traditionally, and acted as control. By using critical thinking assessment instruments, we have found that the experimental group showed a statistically significant improvement on critical thinking skills components and disposition towards critical thinking subscales, such as truth-seeking, open-mindedness, self-confidence, and maturity, compared with the control groups. Our findings suggest that if teachers purposely and persistently practice higher order thinking strategies for example, dealing in class with real-world problems, encouraging open-ended class discussions, and fostering inquiry-oriented experiments, there is a good chance for a consequent development of critical thinking capabilities.",
"title": ""
},
{
"docid": "c90f5a4a34bb7998208c4c134bbab327",
"text": "Most existing studies in text-to-SQL tasks do not require generating complex SQL queries with multiple clauses or sub-queries, and generalizing to new, unseen databases. In this paper we propose SyntaxSQLNet, a syntax tree network to address the complex and crossdomain text-to-SQL generation task. SyntaxSQLNet employs a SQL specific syntax tree-based decoder with SQL generation path history and table-aware column attention encoders. We evaluate SyntaxSQLNet on a new large-scale text-to-SQL corpus containing databases with multiple tables and complex SQL queries containing multiple SQL clauses and nested queries. We use a database split setting where databases in the test set are unseen during training. Experimental results show that SyntaxSQLNet can handle a significantly greater number of complex SQL examples than prior work, outperforming the previous state-of-the-art model by 9.5% in exact matching accuracy. To our knowledge, we are the first to study this complex text-to-SQL task. Our task and models with the latest updates are available at https://yale-lily. github.io/seq2sql/spider.",
"title": ""
},
{
"docid": "5fe589e370271246b55aa3b100595f01",
"text": "Cluster-based distributed file systems generally have a single master to service clients and manage the namespace. Although simple and efficient, that design compromises availability, because the failure of the master takes the entire system down. Before version 2.0.0-alpha, the Hadoop Distributed File System (HDFS) -- an open-source storage, widely used by applications that operate over large datasets, such as MapReduce, and for which an uptime of 24x7 is becoming essential -- was an example of such systems. Given that scenario, this paper proposes a hot standby for the master of HDFS achieved by (i) extending the master's state replication performed by its check pointer helper, the Backup Node, and by (ii) introducing an automatic fail over mechanism. The step (i) took advantage of the message duplication technique developed by other high availability solution for HDFS named Avatar Nodes. The step (ii) employed another Hadoop software: ZooKeeper, a distributed coordination service. That approach resulted in small code changes, 1373 lines, not requiring external components to the Hadoop project. Thus, easing the maintenance and deployment of the file system. Compared to HDFS 0.21, tests showed that both in loads dominated by metadata operations or I/O operations, the reduction of data throughput is no more than 15% on average, and the time to switch the hot standby to active is less than 100 ms. Those results demonstrate the applicability of our solution to real systems. We also present related work on high availability for other file systems and HDFS, including the official solution, recently included in HDFS 2.0.0-alpha.",
"title": ""
},
{
"docid": "160058dae12ea588352f5015483081fc",
"text": "Semiotics is the study of signs. Signs take the form of words, images, sounds, odours, flavours, acts or objects but such things have no intrinsic meaning and become signs only when we invest them with meaning. ‘Nothing is a sign unless it is interpreted as a sign,’ declares Peirce (Peirce, 1931). The two dominant models of a sign are the linguist Ferdinand de Saussure and the philosopher Charles Sanders Peirce. This paper attempts to study the role of semiotics in linguistics. How signs play an important role in studying the language? Index: Semioticstheory of signs and symbols Semanticsstudy of sentences Denotataan actual object referred to by a linguistic expression Divergentmove apart in different directions Linguisticsscientific study of language --------------------------------------------------------------------------------------------Introduction: Semiotics or semiology is the study of sign processes or signification and communication, signs and symbols. It is divided into the three following branches: Semantics: Relation between signs and the things to which they refer; their denotata Syntactics: Relations among signs in formal structures Pragmatics: Relation between signs and their effects on people who use them Syntactics is the branch of semiotics that deals with the formal properties of signs and symbols. It deals with the rules that govern how words are combined to form phrases and sentences. According to Charles Morris “semantics deals with the relation of signs to their designate and the objects which they may or do denote” (Foundations of the theory of science, 1938); and, pragmatics deals with the biotic aspects of semiosis, that is, with all the psychological, biological and sociological phenomena which occur in the functioning of signs. The term, which was spelled semeiotics was first used in English by Henry Stubbes in a very precise sense to denote the branch of medical science relating to the interpretation of signs. Semiotics is not widely institutionalized as an academic discipline. It is a field of study involving many different theoretical stances and methodological tools. One of the broadest definitions is that of Umberto Eco, who states that ‘semiotics is concerned with everything that can be taken as a sign’ (A Theory of Semiotics, 1979). Semiotics involves the study not only of what we refer to as ‘signs’ in everyday speech, but of anything which ‘stands for’ something else. In a semiotic sense, signs take the form of words, images, sounds, gestures and objects. Whilst for the linguist Saussure, ‘semiology’ was ‘a science which studies the role of signs as part of social life’, (Nature of the linguistics sign, 1916) for the philosopher Charles Pierce ‘semiotic’ was the ‘formal doctrine of signs’ which was closely related to logic. For him, ‘a sign... is something which stands to somebody for something in some respect or capacity’. He declared that ‘every thought is a sign.’ Literature review: Semiotics is often employed in the analysis of texts, although it is far more than just a mode of textual analysis. Here it should perhaps be noted that a ‘text’ can IJSER International Journal of Scientific & Engineering Research, Volume 6, Issue 1, January-2015 2135",
"title": ""
},
{
"docid": "0da2484d00456618806d67aabc7e97d2",
"text": "Students’ academic performance is critical for educational institutions because strategic programs can be planned in improving or maintaining students’ performance during their period of studies in the institutions. The academic performance in this study is measured by their cumulative grade point average (CGPA) upon graduating. This study presents the work of data mining in predicting the drop out feature of students. This study applies decision tree technique to choose the best prediction and analysis. The list of students who are predicted as likely to drop out from college by data mining is then turned over to teachers and management for direct or indirect intervention. KeywordsIntruder; hacker; cracker; Intrusion detection; anomaly detection; verification; validation.",
"title": ""
}
] |
scidocsrr
|
d28433f13403045ee842ad1045f3a49a
|
Asymmetric Algorithms and Symmetric Algorithms: A Review
|
[
{
"docid": "fe944f1845eca3b0c252ada2c0306d61",
"text": "Now a days sharing the information over internet is becoming a critical issue due to security problems. Hence more techniques are needed to protect the shared data in an unsecured channel. The present work focus on combination of cryptography and steganography to secure the data while transmitting in the network. Firstly the data which is to be transmitted from sender to receiver in the network must be encrypted using the encrypted algorithm in cryptography .Secondly the encrypted data must be hidden in an image or video or an audio file with help of steganographic algorithm. Thirdly by using decryption technique the receiver can view the original data from the hidden image or video or audio file. Transmitting data or document can be done through these ways will be secured. In this paper we implemented three encrypt techniques like DES, AES and RSA algorithm along with steganographic algorithm like LSB substitution technique and compared their performance of encrypt techniques based on the analysis of its stimulated time at the time of encryption and decryption process and also its buffer size experimentally. The entire process has done in C#.",
"title": ""
}
] |
[
{
"docid": "544feea3dbdbd764cd2bba60ac1c9c93",
"text": "Scholars in many disciplines have considered the antecedents and consequences of various forms of trust. This paper generates 11 propositions exploring the relationship between Human Resource Information Systems (HRIS) and the trust an individual places in the inanimate technology (technology trust) and models the effect of those relationships on HRIS implementation success. Specifically, organizational, technological, and user factors are considered and modeled to generate a set of testable propositions that can subsequently be investigated in various organizational settings. Eleven propositions are offered suggesting that organizational trust, pooled interdependence, organizational community, organizational culture, technology adoption, technology utility, technology usability, socialization, sensitivity to privacy, and predisposition to trust influence an individual’s level of trust in the HRIS technology (technology trust) and ultimately the success of an HRIS implementation process. A summary of the relationships between the key constructs in the model and recommendations for future research are provided.",
"title": ""
},
{
"docid": "e292d4af3c77a11e8e2013fca0c8fb04",
"text": "We present in this paper experiments on Table Recognition in hand-written register books. We first explain how the problem of row and column detection is modelled, and then compare two Machine Learning approaches (Conditional Random Field and Graph Convolutional Network) for detecting these table elements. Evaluation was conducted on death records provided by the Archives of the Diocese of Passau. With an F-1 score of 89, both methods provide a quality which allows for Information Extraction. Software and dataset are open source/data.",
"title": ""
},
{
"docid": "98b32860be2e016d20a49994de4149f1",
"text": "This paper presents a method for optimizing software testing efficiency by identifying the most critical path clusters in a program. We do this by developing variable length Genetic Algorithms that optimize and select the software path clusters which are weighted in accordance with the criticality of the path. Exhaustive software testing is rarely possible because it becomes intractable for even medium sized software. Typically only parts of a program can be tested, but these parts are not necessarily the most error prone. Therefore, we are developing a more selective approach to testing by focusing on those parts that are most critical so that these paths can be tested first. By identifying the most critical paths, the testing efficiency can be increased.",
"title": ""
},
{
"docid": "cf264a124cc9f68cf64cacb436b64fa3",
"text": "Clustering validation has long been recognized as one of the vital issues essential to the success of clustering applications. In general, clustering validation can be categorized into two classes, external clustering validation and internal clustering validation. In this paper, we focus on internal clustering validation and present a detailed study of 11 widely used internal clustering validation measures for crisp clustering. From five conventional aspects of clustering, we investigate their validation properties. Experiment results show that S\\_Dbw is the only internal validation measure which performs well in all five aspects, while other measures have certain limitations in different application scenarios.",
"title": ""
},
{
"docid": "cf43e30eab17189715b085a6e438ea7d",
"text": "This paper presents our investigation of non-orthogonal multiple access (NOMA) as a novel and promising power-domain user multiplexing scheme for future radio access. Based on information theory, we can expect that NOMA with a successive interference canceller (SIC) applied to the receiver side will offer a better tradeoff between system efficiency and user fairness than orthogonal multiple access (OMA), which is widely used in 3.9 and 4G mobile communication systems. This improvement becomes especially significant when the channel conditions among the non-orthogonally multiplexed users are significantly different. Thus, NOMA can be expected to efficiently exploit the near-far effect experienced in cellular environments. In this paper, we describe the basic principle of NOMA in both the downlink and uplink and then present our proposed NOMA scheme for the scenario where the base station is equipped with multiple antennas. Simulation results show the potential system-level throughput gains of NOMA relative to OMA. key words: cellular system, non-orthogonal multiple access, superposition coding, successive interference cancellation",
"title": ""
},
{
"docid": "5e8014d1985991e21f6f985569e6ef91",
"text": "Marie Evans Schmidt and Elizabeth Vandewater review research on links between various types of electronic media and the cognitive skills of school-aged children and adolescents. One central finding of studies to date, they say, is that the content delivered by electronic media is far more influential than the media themselves. Most studies, they point out, find a small negative link between the total hours a child spends viewing TV and that child's academic achievement. But when researchers take into account characteristics of the child, such as IQ or socioeconomic status, this link typically disappears. Content appears to be crucial. Viewing educational TV is linked positively with academic achievement; viewing entertainment TV is linked negatively with achievement. When it comes to particular cognitive skills, say the authors, researchers have found that electronic media, particularly video games, can enhance visual spatial skills, such as visual tracking, mental rotation, and target localization. Gaming may also improve problem-solving skills. Researchers have yet to understand fully the issue of transfer of learning from electronic media. Studies suggest that, under some circumstances, young people are able to transfer what they learn from electronic media to other applications, but analysts are uncertain how such transfer occurs. In response to growing public concern about possible links between electronic media use and attention problems in children and adolescents, say the authors, researchers have found evidence for small positive links between heavy electronic media use and mild attention problems among young people but have found only inconsistent evidence so far for a link between attention deficit hyperactivity disorder and media use. The authors point out that although video games, interactive websites, and multimedia software programs appear to offer a variety of possible benefits for learning, there is as yet little empirical evidence to suggest that such media are more effective than other forms of instruction.",
"title": ""
},
{
"docid": "9096c5bfe44df6dc32641b8f5370d8d0",
"text": "This paper presents a nonintrusive prototype computer vision system for monitoring a driver's vigilance in real time. It is based on a hardware system for the real-time acquisition of a driver's images using an active IR illuminator and the software implementation for monitoring some visual behaviors that characterize a driver's level of vigilance. Six parameters are calculated: Percent eye closure (PERCLOS), eye closure duration, blink frequency, nodding frequency, face position, and fixed gaze. These parameters are combined using a fuzzy classifier to infer the level of inattentiveness of the driver. The use of multiple visual parameters and the fusion of these parameters yield a more robust and accurate inattention characterization than by using a single parameter. The system has been tested with different sequences recorded in night and day driving conditions in a motorway and with different users. Some experimental results and conclusions about the performance of the system are presented",
"title": ""
},
{
"docid": "d0778852e57dddf8a454dd609908ff87",
"text": "Abstract: Trivariate barycentric coordinates can be used both to express a point inside a tetrahedron as a convex combination of the four vertices and to linearly interpolate data given at the vertices. In this paper we generalize these coordinates to convex polyhedra and the kernels of star-shaped polyhedra. These coordinates generalize in a natural way a recently constructed set of coordinates for planar polygons, called mean value coordinates.",
"title": ""
},
{
"docid": "93ec9adabca7fac208a68d277040c254",
"text": "UNLABELLED\nWe developed cyNeo4j, a Cytoscape App to link Cytoscape and Neo4j databases to utilize the performance and storage capacities Neo4j offers. We implemented a Neo4j NetworkAnalyzer, ForceAtlas2 layout and Cypher component to demonstrate the possibilities a distributed setup of Cytoscape and Neo4j have.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe app is available from the Cytoscape App Store at http://apps.cytoscape.org/apps/cyneo4j, the Neo4j plugins at www.github.com/gsummer/cyneo4j-parent and the community and commercial editions of Neo4j can be found at http://www.neo4j.com.\n\n\nCONTACT\[email protected].",
"title": ""
},
{
"docid": "d3cc065dd9212cc351662c51bd5f2284",
"text": "Human activities comprise several sub-activities performed in a sequence and involve interactions with various objects. This makes reasoning about the object affordances a central task for activity recognition. In this work, we consider the problem of jointly labeling the object affordances and human activities from RGB-D videos. We frame the problem as a Markov Random Field where the nodes represent objects and sub-activities, and the edges represent the relationships between object affordances, their relations with sub-activities, and their evolution over time. We formulate the learning problem using a structural SVM approach, where labeling over various alternate temporal segmentations are considered as latent variables. We tested our method on a dataset comprising 120 activity videos collected from four subjects, and obtained an end-to-end precision of 81.8% and recall of 80.0% for labeling the activities.",
"title": ""
},
{
"docid": "90cafc449ebe112a022715f7b6845ba9",
"text": "Deep neural nets have caused a revolution in many classification tasks. A related ongoing revolution—also theoretically not understood—concerns their ability to serve as generative models for complicated types of data such as images and texts. These models are trained using ideas like variational autoencoders and Generative Adversarial Networks. We take a first cut at explaining the expressivity of multilayer nets by giving a sufficient criterion for a function to be approximable by a neural network with n hidden layers. A key ingredient is Barron’s Theorem [Bar93], which gives a Fourier criterion for approximability of a function by a neural network with 1 hidden layer. We show that a composition of n functions which satisfy certain Fourier conditions (“Barron functions”) can be approximated by a n+ 1layer neural network. For probability distributions, this translates into a criterion for a probability distribution to be approximable in Wasserstein distance—a natural metric on probability distributions—by a neural network applied to a fixed base distribution (e.g., multivariate gaussian). Building up recent lower bound work, we also give an example function that shows that composition of Barron functions is more expressive than Barron functions alone.",
"title": ""
},
{
"docid": "23159d5a2ddda7d83ea4befa808f1af4",
"text": "We investigate potential benefits of employing Design Structure Matrix (DSM) in the context of Model-Based Systems Engineering (MBSE) for the purposes of analyzing and improving the design of a product-project ensemble. Focusing on process DSM, we present an algorithm for bidirectional transformation frame between a product-project system model and its corresponding Model-Based DSM (MDSM). Using Object-Process Methodology (OPM) as the underlying modeling language, we examine and characterize useful and insightful relationships between the system model and its MDSM. An unmanned aerial vehicle case study demonstrates the semantics of and analogy between various types of relationships as they are reflected in both the OPM system model and the MDSM derived from it. Finally, we conclude with further research direction on showing how clustering of DSM processes can be reflected back as an improvement of the OPM model.",
"title": ""
},
{
"docid": "31512e01cebd226da8db288ecf6869c5",
"text": "In recent years, deep learning has shown performance breakthroughs in many applications, such as image detection, image segmentation, pose estimation, and speech recognition. It was also applied successfully to malware detection. However, this comes with a major concern: deep networks have been found to be vulnerable to adversarial examples. So far successful attacks have been proved to be very effective especially in the domains of images and speech, where small perturbations to the input signal do not change how it is perceived by humans but greatly affect the classification of the model under attack. Our goal is to modify a malicious binary so it would be detected as benign while preserving its original functionality. In contrast to images or speech, small modifications to bytes of the binary lead to significant changes in the functionality. We introduce a novel approach to generating adversarial example for attacking a whole-binary malware detector. We append to the binary file a small section, which contains a selected sequence of bytes that steers the prediction of the network from malicious to be benign with high confidence. We applied this approach to a CNNbased malware detection model and showed extremely high rates of success in the attack.",
"title": ""
},
{
"docid": "372fa95863cf20fdcb632d033cb4d944",
"text": "Traditional approaches for color propagation in videos rely on some form of matching between consecutive video frames. Using appearance descriptors, colors are then propagated both spatially and temporally. These methods, however, are computationally expensive and do not take advantage of semantic information of the scene. In this work we propose a deep learning framework for color propagation that combines a local strategy, to propagate colors frame-by-frame ensuring temporal stability, and a global strategy, using semantics for color propagation within a longer range. Our evaluation shows the superiority of our strategy over existing video and image color propagation methods as well as neural photo-realistic style transfer approaches.",
"title": ""
},
{
"docid": "e4e97569f53ddde763f4f28559c96ba6",
"text": "With a goal of understanding what drives generalization in deep networks, we consider several recently suggested explanations, including norm-based control, sharpness and robustness. We study how these measures can ensure generalization, highlighting the importance of scale normalization, and making a connection between sharpness and PAC-Bayes theory. We then investigate how well the measures explain different observed phenomena.",
"title": ""
},
{
"docid": "456d376029d594170c81dbe455a4086a",
"text": "Long range, low power networks are rapidly gaining acceptance in the Internet of Things (IoT) due to their ability to economically support long-range sensing and control applications while providing multi-year battery life. LoRa is a key example of this new class of network and is being deployed at large scale in several countries worldwide. As these networks move out of the lab and into the real world, they expose a large cyber-physical attack surface. Securing these networks is therefore both critical and urgent. This paper highlights security issues in LoRa and LoRaWAN that arise due to the choice of a robust but slow modulation type in the protocol. We exploit these issues to develop a suite of practical attacks based around selective jamming. These attacks are conducted and evaluated using commodity hardware. The paper concludes by suggesting a range of countermeasures that can be used to mitigate the attacks.",
"title": ""
},
{
"docid": "c2e92f8289ebf50ca363840133dc2a43",
"text": "0957-4174/$ see front matter 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.eswa.2013.08.042 ⇑ Address: WOLNM & ESIME Zacatenco, Instituto Politécnico Nacional, U. Profesional Adolfo López Mateos, Edificio Z-4, 2do piso, cubiculo 6, Miguel Othón de Mendizábal S/N, La Escalera, Gustavo A. Madero, D.F., C.P. 07320, Mexico. Tel.: +52 55 5694 0916/+52 55 5454 2611 (cellular); fax: +52 55 5694 0916. E-mail address: [email protected] URL: http://www.wolnm.org/apa 1 AIWBES: adaptive and intelligent web-based educational systems; BKT: Bayesian knowledge tracing; CBES: computer-based educational systems; CBIS: computerbased information system,; DM: data mining; DP: dynamic programming; EDM: educational data mining; EM: expectation maximization; HMM: hidden Markov model; IBL: instances-based learning; IRT: item response theory; ITS: intelligent tutoring systems; KDD: knowledge discovery in databases; KT: knowledge tracing; LMS: learning management systems; SNA: social network analysis; SWOT: strengths, weakness, opportunities, and threats; WBC: web-based courses; WBES: web-based educational systems. Alejandro Peña-Ayala ⇑",
"title": ""
},
{
"docid": "a92aa1ea6faf19a2257dce1dda9cd0d0",
"text": "This paper introduces a novel content-adaptive image downscaling method. The key idea is to optimize the shape and locations of the downsampling kernels to better align with local image features. Our content-adaptive kernels are formed as a bilateral combination of two Gaussian kernels defined over space and color, respectively. This yields a continuum ranging from smoothing to edge/detail preserving kernels driven by image content. We optimize these kernels to represent the input image well, by finding an output image from which the input can be well reconstructed. This is technically realized as an iterative maximum-likelihood optimization using a constrained variation of the Expectation-Maximization algorithm. In comparison to previous downscaling algorithms, our results remain crisper without suffering from ringing artifacts. Besides natural images, our algorithm is also effective for creating pixel art images from vector graphics inputs, due to its ability to keep linear features sharp and connected.",
"title": ""
},
{
"docid": "2b8ca8be8d5e468d4cd285ecc726eceb",
"text": "These days, large-scale graph processing becomes more and more important. Pregel, inspired by Bulk Synchronous Parallel, is one of the highly used systems to process large-scale graph problems. In Pregel, each vertex executes a function and waits for a superstep to communicate its data to other vertices. Superstep is a very time-consuming operation, used by Pregel, to synchronize distributed computations in a cluster of computers. However, it may become a bottleneck when the number of communications increases in a graph with million vertices. Superstep works like a barrier in Pregel that increases the side effect of skew problem in distributed computing environment. ExPregel is a Pregel-like model that is designed to reduce the number of communication messages between two vertices resided on two different computational nodes. We have proven that ExPregel reduces the number of exchanged messages as well as the number of supersteps for all graph topologies. Enhancing parallelism in our new computational model is another important feature that manifolds the speed of graph analysis programs. More interestingly, ExPregel uses the same model of programming as Pregel. Our experiments on large-scale real-world graphs show that ExPregel can reduce network traffic as well as number of supersteps from 45% to 96%. Runtime speed up in the proposed model varies from 1.2× to 30×. Copyright © 2015 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "69871f7730ce78129cb07b029151de48",
"text": "Biological signal processing offers an alternative to improve life quality in handicapped patients. In this sense is possible, to control devices as wheel chairs or computer systems. The signals that are usually used are EMG, EOG and EEG. When the lost of ability is severe the use of EMG signals is not possible because the person had lost, as in the case of ALS patients, the ability to control his body. EOG offers low resolution because the technique depends of many external and uncontrollable variables of the environment. This work shows the design of a set of algorithms capable to classify brain signals related to imaginary motor activities ( left and right hand imaginary). First, digital signal processing is used to select and extract discriminant features, using parametrical methods for the estimation of the power spectral density and the Fisher criterion for separability. The signal is then classified, using linear discriminant analysis. The results show that is possible to obtain good performance with error rates as low as 13% and that the use of parametrical methods for Spectral Power Density estimation can improve the accuracy of the Brain Computer Interface.",
"title": ""
}
] |
scidocsrr
|
80df194bf7f0aedd9a14fb55de2b3856
|
The Body and the Beautiful: Health, Attractiveness and Body Composition in Men’s and Women’s Bodies
|
[
{
"docid": "6210a0a93b97a12c2062ac78953f3bd1",
"text": "This article proposes a contextual-evolutionary theory of human mating strategies. Both men and women are hypothesized to have evolved distinct psychological mechanisms that underlie short-term and long-term strategies. Men and women confront different adaptive problems in short-term as opposed to long-term mating contexts. Consequently, different mate preferences become activated from their strategic repertoires. Nine key hypotheses and 22 predictions from Sexual Strategies Theory are outlined and tested empirically. Adaptive problems sensitive to context include sexual accessibility, fertility assessment, commitment seeking and avoidance, immediate and enduring resource procurement, paternity certainty, assessment of mate value, and parental investment. Discussion summarizes 6 additional sources of behavioral data, outlines adaptive problems common to both sexes, and suggests additional contexts likely to cause shifts in mating strategy.",
"title": ""
}
] |
[
{
"docid": "dabbcd5d79b011b7d091ef3a471d9779",
"text": "This paper borrows ideas from social science to inform the design of novel \"sensing\" user-interfaces for computing technology. Specifically, we present five design challenges inspired by analysis of human-human communication that are mundanely addressed by traditional graphical user interface designs (GUIs). Although classic GUI conventions allow us to finesse these questions, recent research into innovative interaction techniques such as 'Ubiquitous Computing' and 'Tangible Interfaces' has begun to expose the interaction challenges and problems they pose. By making them explicit we open a discourse on how an approach similar to that used by social scientists in studying human-human interaction might inform the design of novel interaction mechanisms that can be used to handle human-computer communication accomplishments",
"title": ""
},
{
"docid": "9d2ec490b7efb23909abdbf5f209f508",
"text": "Terrestrial Laser scanner (TLS) has been widely used in our recent architectural heritage projects and huge quantity of point cloud data was gotten. In order to process the huge quantity of point cloud data effectively and reconstruct their 3D models, more effective methods should be developed based on existing automatic or semiautomatic point cloud processing algorithms. Here introduce a new algorithm for rapid extracting the pillar features of Chinese ancient buildings from their point cloud data, the algorithm has the least human interaction in the data processing and is more efficient to extract pillars from point cloud data than existing feature extracting algorithms. With this algorithm we identify the pillar features by dividing the point cloud into slices firstly, and then get the projective parameters of pillar objects in selected slices, the next compare the local projective parameters in adjacent slices, the next combine them to get the global parameters of the pillars and at last reconstruct the 3d pillar models.",
"title": ""
},
{
"docid": "bd3717bd46869b9be3153478cbd19f2a",
"text": "The study was conducted to assess the effectiveness of jasmine oil massage on labour pain during first stage of labour among 40 primigravida women. The study design adopted was true experimental approach with pre-test post-test control group design. The demographic Proforma were collected from the women by interview and Visual analogue scale was used to measure the level of labour pain in both the groups. Data obtained in these areas were analysed by descriptive and inferential statistics. A significant difference was found in the experimental group( t 9.869 , p<0.05) . A significant difference was found between experimental group and control group. cal",
"title": ""
},
{
"docid": "4bd123c2c44e703133e9a6093170db39",
"text": "This paper presents a single-phase cascaded H-bridge converter for a grid-connected photovoltaic (PV) application. The multilevel topology consists of several H-bridge cells connected in series, each one connected to a string of PV modules. The adopted control scheme permits the independent control of each dc-link voltage, enabling, in this way, the tracking of the maximum power point for each string of PV panels. Additionally, low-ripple sinusoidal-current waveforms are generated with almost unity power factor. The topology offers other advantages such as the operation at lower switching frequency or lower current ripple compared to standard two-level topologies. Simulation and experimental results are presented for different operating conditions.",
"title": ""
},
{
"docid": "e637dc1aee0632f61a29c8609187a98b",
"text": "Scene coordinate regression has become an essential part of current camera re-localization methods. Different versions, such as regression forests and deep learning methods, have been successfully applied to estimate the corresponding camera pose given a single input image. In this work, we propose to regress the scene coordinates pixel-wise for a given RGB image by using deep learning. Compared to the recent methods, which usually employ RANSAC to obtain a robust pose estimate from the established point correspondences, we propose to regress confidences of these correspondences, which allows us to immediately discard erroneous predictions and improve the initial pose estimates. Finally, the resulting confidences can be used to score initial pose hypothesis and aid in pose refinement, offering a generalized solution to solve this task.",
"title": ""
},
{
"docid": "7ce9ef05d3f4a92f6b187d7986b70be1",
"text": "With the growth in the consumer electronics industry, it is vital to develop an algorithm for ultrahigh definition products that is more effective and has lower time complexity. Image interpolation, which is based on an autoregressive model, has achieved significant improvements compared with the traditional algorithm with respect to image reconstruction, including a better peak signal-to-noise ratio (PSNR) and improved subjective visual quality of the reconstructed image. However, the time-consuming computation involved has become a bottleneck in those autoregressive algorithms. Because of the high time cost, image autoregressive-based interpolation algorithms are rarely used in industry for actual production. In this study, in order to meet the requirements of real-time reconstruction, we use diverse compute unified device architecture (CUDA) optimization strategies to make full use of the graphics processing unit (GPU) (NVIDIA Tesla K80), including a shared memory and register and multi-GPU optimization. To be more suitable for the GPU-parallel optimization, we modify the training window to obtain a more concise matrix operation. Experimental results show that, while maintaining a high PSNR and subjective visual quality and taking into account the I/O transfer time, our algorithm achieves a high speedup of 147.3 times for a Lena image and 174.8 times for a 720p video, compared to the original single-threaded C CPU code with -O2 compiling optimization.",
"title": ""
},
{
"docid": "a8d6a864092b3deb58be27f0f76b02c2",
"text": "High-quality word representations have been very successful in recent years at improving performance across a variety of NLP tasks. These word representations are the mappings of each word in the vocabulary to a real vector in the Euclidean space. Besides high performance on specific tasks, learned word representations have been shown to perform well on establishing linear relationships among words. The recently introduced skipgram model improved performance on unsupervised learning of word embeddings that contains rich syntactic and semantic word relations both in terms of accuracy and speed. Word embeddings that have been used frequently on English language, is not applied to Turkish yet. In this paper, we apply the skip-gram model to a large Turkish text corpus and measured the performance of them quantitatively with the \"question\" sets that we generated. The learned word embeddings and the question sets are publicly available at our website. Keywords—Word embeddings, Natural Language Processing, Deep Learning",
"title": ""
},
{
"docid": "67a3f92ab8c5a6379a30158bb9905276",
"text": "We present a compendium of recent and current projects that utilize crowdsourcing technologies for language studies, finding that the quality is comparable to controlled laboratory experiments, and in some cases superior. While crowdsourcing has primarily been used for annotation in recent language studies, the results here demonstrate that far richer data may be generated in a range of linguistic disciplines from semantics to psycholinguistics. For these, we report a number of successful methods for evaluating data quality in the absence of a ‘correct’ response for any given data point.",
"title": ""
},
{
"docid": "41d32df9d58f9c38f75010c87c0c3327",
"text": "Evidence from many countries in recent years suggests that collateral values and recovery rates on corporate defaults can be volatile and, moreover, that they tend to go down just when the number of defaults goes up in economic downturns. This link between recovery rates and default rates has traditionally been neglected by credit risk models, as most of them focused on default risk and adopted static loss assumptions, treating the recovery rate either as a constant parameter or as a stochastic variable independent from the probability of default. This traditional focus on default analysis has been partly reversed by the recent significant increase in the number of studies dedicated to the subject of recovery rate estimation and the relationship between default and recovery rates. This paper presents a detailed review of the way credit risk models, developed during the last thirty years, treat the recovery rate and, more specifically, its relationship with the probability of default of an obligor. We also review the efforts by rating agencies to formally incorporate recovery ratings into their assessment of corporate loan and bond credit risk and the recent efforts by the Basel Committee on Banking Supervision to consider “downturn LGD” in their suggested requirements under Basel II. Recent empirical evidence concerning these issues and the latest data on high-yield bond and leverage loan defaults is also presented and discussed.",
"title": ""
},
{
"docid": "db36273a3669e1aeda1bf2c5ab751387",
"text": "Autonomous Ground Vehicles designed for dynamic environments require a reliable perception of the real world, in terms of obstacle presence, position and speed. In this paper we present a flexible technique to build, in real time, a dense voxel-based map from a 3D point cloud, able to: (1) discriminate between stationary and moving obstacles; (2) provide an approximation of the detected obstacle's absolute speed using the information of the vehicle's egomotion computed through a visual odometry approach. The point cloud is first sampled into a full 3D map based on voxels to preserve the tridimensional information; egomotion information allows computational efficiency in voxels creation; then voxels are processed using a flood fill approach to segment them into a clusters structure; finally, with the egomotion information, the obtained clusters are labeled as stationary or moving obstacles, and an estimation of their speed is provided. The algorithm runs in real time; it has been tested on one of VisLab's AGVs using a modified SGM-based stereo system as 3D data source.",
"title": ""
},
{
"docid": "01962e512740addbe5f444ed581ebb48",
"text": "We investigate how neural, encoder-decoder translation systems output target strings of appropriate lengths, finding that a collection of hidden units learns to explicitly implement this functionality.",
"title": ""
},
{
"docid": "262c11ab9f78e5b3f43a31ad22cf23c5",
"text": "Responding to threats in the environment is crucial for survival. Certain types of threat produce defensive responses without necessitating previous experience and are considered innate, whereas other threats are learned by experiencing aversive consequences. Two important innate threats are whether an encountered stimulus is a member of the same species (social threat) and whether a stimulus suddenly appears proximal to the body (proximal threat). These threats are manifested early in human development and robustly elicit defensive responses. Learned threat, on the other hand, enables adaptation to threats in the environment throughout the life span. A well-studied form of learned threat is fear conditioning, during which a neutral stimulus acquires the ability to eliciting defensive responses through pairings with an aversive stimulus. If innate threats can facilitate fear conditioning, and whether different types of innate threats can enhance each other, is largely unknown. We developed an immersive virtual reality paradigm to test how innate social and proximal threats are related to each other and how they influence conditioned fear. Skin conductance responses were used to index the autonomic component of the defensive response. We found that social threat modulates proximal threat, but that neither proximal nor social threat modulates conditioned fear. Our results suggest that distinct processes regulate autonomic activity in response to proximal and social threat on the one hand, and conditioned fear on the other.",
"title": ""
},
{
"docid": "1a0ed30b64fa7f8d39a12acfcadfd763",
"text": "This letter presents a smart shelf configuration for radio frequency identification (RFID) application. The proposed shelf has an embedded leaking microstrip transmission line with extended ground plane. This structure, when connected to an RFID reader, allows detecting tagged objects in close proximity with proper field confinement to avoid undesired reading of neighboring shelves. The working frequency band covers simultaneously the three world assigned RFID subbands at ultrahigh frequency (UHF). The concept is explored by full-wave simulations and it is validated with thorough experimental tests.",
"title": ""
},
{
"docid": "ff8089430cdae3e733b06a7aa1b759b4",
"text": "We derive a model for consumer loan default and credit card expenditure. The default model is based on statistical models for discrete choice, in contrast to the usual procedure of linear discriminant analysis. The model is then extended to incorporate the default probability in a model of expected profit. The technique is applied to a large sample of applications and expenditure from a major credit card company. The nature of the data mandates the use of models of sample selection for estimation. The empirical model for expected profit produces an optimal acceptance rate for card applications which is far higher than the observed rate used by the credit card vendor based on the discriminant analysis. I am grateful to Terry Seaks for valuable comments on an earlier draft of this paper and to Jingbin Cao for his able research assistance. The provider of the data and support for this project has requested anonymity, so I must thank them as such. Their help and support are gratefully acknowledged. Participants in the applied econometrics workshop at New York University also provided useful commentary.",
"title": ""
},
{
"docid": "fb2287cb1c41441049288335f10fd473",
"text": "One of the important types of information on the Web is the opinions expressed in the user generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) or undesirable (e.g., bad, poor, etc) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidences and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. A system, called Opinion Observer, based on the proposed technique has been implemented. Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly",
"title": ""
},
{
"docid": "92da117d31574246744173b339b0d055",
"text": "We present a method for gesture detection and localization based on multi-scale and multi-modal deep learning. Each visual modality captures spatial information at a particular spatial scale (such as motion of the upper body or a hand), and the whole system operates at two temporal scales. Key to our technique is a training strategy which exploits i) careful initialization of individual modalities; and ii) gradual fusion of modalities from strongest to weakest cross-modality structure. We present experiments on the ChaLearn 2014 Looking at People Challenge gesture recognition track, in which we placed first out of 17 teams.",
"title": ""
},
{
"docid": "bf294a4c3af59162b2f401e2cdcb060b",
"text": "We present MCTest, a freely available set of stories and associated questions intended for research on the machine comprehension of text. Previous work on machine comprehension (e.g., semantic modeling) has made great strides, but primarily focuses either on limited-domain datasets, or on solving a more restricted goal (e.g., open-domain relation extraction). In contrast, MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. Reading comprehension can test advanced abilities such as causal reasoning and understanding the world, yet, by being multiple-choice, still provide a clear metric. By being fictional, the answer typically can be found only in the story itself. The stories and questions are also carefully limited to those a young child would understand, reducing the world knowledge that is required for the task. We present the scalable crowd-sourcing methods that allow us to cheaply construct a dataset of 500 stories and 2000 questions. By screening workers (with grammar tests) and stories (with grading), we have ensured that the data is the same quality as another set that we manually edited, but at one tenth the editing cost. By being open-domain, yet carefully restricted, we hope MCTest will serve to encourage research and provide a clear metric for advancement on the machine comprehension of text. 1 Reading Comprehension A major goal for NLP is for machines to be able to understand text as well as people. Several research disciplines are focused on this problem: for example, information extraction, relation extraction, semantic role labeling, and recognizing textual entailment. Yet these techniques are necessarily evaluated individually, rather than by how much they advance us towards the end goal. On the other hand, the goal of semantic parsing is the machine comprehension of text (MCT), yet its evaluation requires adherence to a specific knowledge representation, and it is currently unclear what the best representation is, for open-domain text. We believe that it is useful to directly tackle the top-level task of MCT. For this, we need a way to measure progress. One common method for evaluating someone’s understanding of text is by giving them a multiple-choice reading comprehension test. This has the advantage that it is objectively gradable (vs. essays) yet may test a range of abilities such as causal or counterfactual reasoning, inference among relations, or just basic understanding of the world in which the passage is set. Therefore, we propose a multiple-choice reading comprehension task as a way to evaluate progress on MCT. We have built a reading comprehension dataset containing 500 fictional stories, with 4 multiple choice questions per story. It was built using methods which can easily scale to at least 5000 stories, since the stories were created, and the curation was done, using crowd sourcing almost entirely, at a total of $4.00 per story. We plan to periodically update the dataset to ensure that methods are not overfitting to the existing data. The dataset is open-domain, yet restricted to concepts and words that a 7 year old is expected to understand. This task is still beyond the capability of today’s computers and algorithms.",
"title": ""
},
{
"docid": "10318d39b3ad18779accbf29b2f00fcd",
"text": "Designing convolutional neural networks (CNN) models for mobile devices is challenging because mobile models need to be small and fast, yet still accurate. Although significant effort has been dedicated to design and improve mobile models on all three dimensions, it is challenging to manually balance these trade-offs when there are so many architectural possibilities to consider. In this paper, we propose an automated neural architecture search approach for designing resourceconstrained mobile CNN models. We propose to explicitly incorporate latency information into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and latency. Unlike in previous work, where mobile latency is considered via another, often inaccurate proxy (e.g., FLOPS), in our experiments, we directly measure real-world inference latency by executing the model on a particular platform, e.g., Pixel phones. To further strike the right balance between flexibility and search space size, we propose a novel factorized hierarchical search space that permits layer diversity throughout the network. Experimental results show that our approach consistently outperforms state-of-the-art mobile CNN models across multiple vision tasks. On the ImageNet classification task, our model achieves 74.0% top-1 accuracy with 76ms latency on a Pixel phone, which is 1.5× faster than MobileNetV2 (Sandler et al. 2018) and 2.4× faster than NASNet (Zoph et al. 2018) with the same top-1 accuracy. On the COCO object detection task, our model family achieves both higher mAP quality and lower latency than MobileNets.",
"title": ""
},
{
"docid": "f6a9670544a784a5fc431746557473a3",
"text": "Massive multiple-input multiple-output (MIMO) systems are cellular networks where the base stations (BSs) are equipped with unconventionally many antennas, deployed on co-located or distributed arrays. Huge spatial degrees-of-freedom are achieved by coherent processing over these massive arrays, which provide strong signal gains, resilience to imperfect channel knowledge, and low interference. This comes at the price of more infrastructure; the hardware cost and circuit power consumption scale linearly/affinely with the number of BS antennas N. Hence, the key to cost-efficient deployment of large arrays is low-cost antenna branches with low circuit power, in contrast to today's conventional expensive and power-hungry BS antenna branches. Such low-cost transceivers are prone to hardware imperfections, but it has been conjectured that the huge degrees-of-freedom would bring robustness to such imperfections. We prove this claim for a generalized uplink system with multiplicative phase-drifts, additive distortion noise, and noise amplification. Specifically, we derive closed-form expressions for the user rates and a scaling law that shows how fast the hardware imperfections can increase with N while maintaining high rates. The connection between this scaling law and the power consumption of different transceiver circuits is rigorously exemplified. This reveals that one can make √N the circuit power increase as N, instead of linearly, by careful circuit-aware system design.",
"title": ""
},
{
"docid": "fa20b9427a8dcfd8db90e0a6eb5e7d8c",
"text": "Recent functional brain imaging studies suggest that object concepts may be represented, in part, by distributed networks of discrete cortical regions that parallel the organization of sensory and motor systems. In addition, different regions of the left lateral prefrontal cortex, and perhaps anterior temporal cortex, may have distinct roles in retrieving, maintaining and selecting semantic information.",
"title": ""
}
] |
scidocsrr
|
be1251672e2ef44c457d70a7d89cb546
|
Understanding MOOC students: motivations and behaviours indicative of MOOC completion
|
[
{
"docid": "a7eff25c60f759f15b41c85ac5e3624f",
"text": "Connectivist massive open online courses (cMOOCs) represent an important new pedagogical approach ideally suited to the network age. However, little is known about how the learning experience afforded by cMOOCs is suited to learners with different skills, motivations, and dispositions. In this study, semi-structured interviews were conducted with 29 participants on the Change11 cMOOC. These accounts were analyzed to determine patterns of engagement and factors affecting engagement in the course. Three distinct types of engagement were recognized – active participation, passive participation, and lurking. In addition, a number of key factors that mediated engagement were identified including confidence, prior experience, and motivation. This study adds to the overall understanding of learning in cMOOCs and provides additional empirical data to a nascent research field. The findings provide an insight into how the learning experience afforded by cMOOCs suits the diverse range of learners that may coexist within a cMOOC. These insights can be used by designers of future cMOOCs to tailor the learning experience to suit the diverse range of learners that may choose to learn in this way.",
"title": ""
}
] |
[
{
"docid": "b80ab14d0908a2a66a4c5a020860a6ac",
"text": "We evaluate U.S. firms’ leverage determinants by studying how 1,801 firms paid for 2,073 very large investments during the period 1989-2006. This approach complements existing empirical work on capital structure, which typically estimates regression models for a broad set of CRSP/Compustat firms. If firms making large investments generally raise new external funds, their securities issuances should provide information about managers’ attitudes toward leverage. Our data indicate that large investments are mostly externally financed and that firms issue securities that tend to move them quite substantially toward target debt ratios. Firms also tend to issue more equity following a share price runup or when the market-to-book ratio is high. We find little support for the standard pecking order hypothesis.",
"title": ""
},
{
"docid": "e53c7f8890d3bf49272e08d4446703a4",
"text": "In orthogonal frequency-division multiplexing (OFDM) systems, it is generally assumed that the channel response is static in an OFDM symbol period. However, the assumption does not hold in high-mobility environments. As a result, intercarrier interference (ICI) is induced, and system performance is degraded. A simple remedy for this problem is the application of the zero-forcing (ZF) equalizer. Unfortunately, the direct ZF method requires the inversion of an N times N ICI matrix, where N is the number of subcarriers. When N is large, the computational complexity can become prohibitively high. In this paper, we first propose a low-complexity ZF method to solve the problem in single-input-single-output (SISO) OFDM systems. The main idea is to explore the special structure inherent in the ICI matrix and apply Newton's iteration for matrix inversion. With our formulation, fast Fourier transforms (FFTs) can be used in the iterative process, reducing the complexity from O (N3) to O (N log2 N). Another feature of the proposed algorithm is that it can converge very fast, typically in one or two iterations. We also analyze the convergence behavior of the proposed method and derive the theoretical output signal-to-interference-plus-noise ratio (SINR). For a multiple-input-multiple-output (MIMO) OFDM system, the complexity of the ZF method becomes more intractable. We then extend the method proposed for SISO-OFDM systems to MIMO-OFDM systems. It can be shown that the computational complexity can be reduced even more significantly. Simulations show that the proposed methods perform almost as well as the direct ZF method, while the required computational complexity is reduced dramatically.",
"title": ""
},
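The low-complexity ZF abstract above hinges on Newton's iteration for matrix inversion. Below is a minimal numpy sketch of that iteration only; the FFT-based products that give the O(N log₂ N) cost are not reproduced, and the diagonal initial guess and the toy near-identity ICI matrix are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

def newton_matrix_inverse(A, iters=2):
    """Approximate A^{-1} with the Newton(-Schulz) iteration X <- X (2I - A X).

    The abstract applies this idea to the N x N ICI matrix and evaluates the
    matrix products with FFTs; here plain dense products are used instead, and
    the inverted diagonal is assumed as the starting point (reasonable when the
    ICI matrix is diagonally dominant, i.e. moderate Doppler).
    """
    n = A.shape[0]
    I = np.eye(n, dtype=A.dtype)
    X = np.diag(1.0 / np.diag(A))          # initial guess
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X

# Usage: equalize a received OFDM symbol y = H x (noise omitted for simplicity).
rng = np.random.default_rng(0)
N = 64
H = np.eye(N) + 0.02 * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
x = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=N)
y = H @ x
x_hat = newton_matrix_inverse(H, iters=3) @ y
print(np.max(np.abs(x_hat - x)))           # error shrinks quadratically with iters
```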
{
"docid": "0cc16f8fe35cbf169de8263236d08166",
"text": "In this paper, we revisit a generally accepted opinion: implementing Elliptic Curve Cryptosystem (ECC) over GF (2) on sensor motes using small word size is not appropriate because XOR multiplication over GF (2) is not efficiently supported by current low-powered microprocessors. Although there are some implementations over GF (2) on sensor motes, their performances are not satisfactory enough to be used for wireless sensor networks (WSNs). We have found that a field multiplication over GF (2) are involved in a number of redundant memory accesses and its inefficiency is originated from this problem. Moreover, the field reduction process also requires many redundant memory accesses. Therefore, we propose some techniques for reducing unnecessary memory accesses. With the proposed strategies, the running time of field multiplication and reduction over GF (2) can be decreased by 21.1% and 24.7%, respectively. These savings noticeably decrease execution times spent in Elliptic Curve Digital Signature Algorithm (ECDSA) operations (signing and verification) by around 15% ∼ 19%. We present TinyECCK (Tiny Elliptic Curve Cryptosystem with Koblitz curve – a kind of TinyOS package supporting elliptic curve operations) which is the fastest ECC implementation over GF (2) on 8-bit sensor motes using ATmega128L as far as we know. Through comparisons with existing software implementations of ECC built in C or hybrid of C and inline assembly on sensor motes, we show that TinyECCK outperforms them in terms of running time, code size and supporting services. Furthermore, we show that a field multiplication over GF (2) can be faster than that over GF (p) on 8-bit ATmega128L processor by comparing TinyECCK with TinyECC, a well-known ECC implementation over GF (p). TinyECCK with sect163k1 can compute a scalar multiplication within 1.14 secs on a MICAz mote at the expense of 5,592-byte of ROM and 618-byte of RAM. Furthermore, it can also generate a signature and verify it in 1.37 and 2.32 secs with 13,748-byte of ROM and 1,004-byte of RAM. 2 Seog Chung Seo et al.",
"title": ""
},
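To make the GF(2^m) arithmetic in the TinyECCK abstract concrete, here is a plain shift-and-XOR sketch of binary-field multiplication with reduction by the sect163k1 pentanomial x^163 + x^7 + x^6 + x^3 + 1. It shows only the textbook algorithm, none of the memory-access optimizations the paper is about, and the encoding of field elements as Python integers is an assumption for illustration.

```python
M = 163
# sect163k1 field polynomial f(x) = x^163 + x^7 + x^6 + x^3 + 1
F = (1 << 163) | (1 << 7) | (1 << 6) | (1 << 3) | 1

def gf2m_mul(a, b, m=M, f=F):
    """Multiply a, b in GF(2^m); bits of an int are the polynomial coefficients."""
    # Carry-less (XOR) schoolbook multiplication
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    # Reduce the result modulo the field polynomial
    for i in range(2 * m - 2, m - 1, -1):
        if r & (1 << i):
            r ^= f << (i - m)
    return r

# Usage / sanity check: multiplying by the constant 1 is the identity.
a = (1 << 162) | (1 << 80) | (1 << 7) | 1   # arbitrary element of GF(2^163)
assert gf2m_mul(a, 1) == a
print(hex(gf2m_mul(a, 0b110)))              # a * (x^2 + x), reduced mod f(x)
```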
{
"docid": "a0c1f145f423052b6e8059c5849d3e34",
"text": "Improved methods of assessment and research design have established a robust and causal association between stressful life events and major depressive episodes. The chapter reviews these developments briefly and attempts to identify gaps in the field and new directions in recent research. There are notable shortcomings in several important topics: measurement and evaluation of chronic stress and depression; exploration of potentially different processes of stress and depression associated with first-onset versus recurrent episodes; possible gender differences in exposure and reactivity to stressors; testing kindling/sensitization processes; longitudinal tests of diathesis-stress models; and understanding biological stress processes associated with naturally occurring stress and depressive outcomes. There is growing interest in moving away from unidirectional models of the stress-depression association, toward recognition of the effects of contexts and personal characteristics on the occurrence of stressors, and on the likelihood of progressive and dynamic relationships between stress and depression over time-including effects of childhood and lifetime stress exposure on later reactivity to stress.",
"title": ""
},
{
"docid": "f0eb42b522eadddaff7ebf479f791193",
"text": "High-density and low-leakage 1W1R 2-port (2P) SRAM is realized by 6T 1-port SRAM bitcell with double pumping internal clock in 16 nm FinFET technology. Proposed clock generator with address latch circuit enables robust timing design without sever setup/hold margin. We designed a 256 kb 1W1R 2P SRAM macro which achieves the highest density of 6.05 Mb/mm2. Measured data shows that a 313 ps of read-access-time is observed at 0.8 V. Standby leakage power in resume standby (RS) mode is reduced by 79% compared to the conventional dual-port SRAM without RS.",
"title": ""
},
{
"docid": "fddbcbdb0de1c7d49fe5545f3ab1bdfa",
"text": "Photovoltaic Systems (PVS) can be easily integrated in residential buildings hence they will be the main responsible of making low-voltage grid power flow bidirectional. Control issues on both the PV side and on the grid side have received much attention from manufacturers, competing for efficiency and low distortion and academia proposing new ideas soon become state-of-the-art. This paper aims at reviewing part of these topics (MPPT, current and voltage control) leaving to a future paper to complete the scenario. Implementation issues on Digital Signal Processor (DSP), the mandatory choice in this market segment, are discussed.",
"title": ""
},
{
"docid": "2639f5d735abed38ed4f7ebf11072087",
"text": "The rising popularity of intelligent mobile devices and the daunting computational cost of deep learning-based models call for efficient and accurate on-device inference schemes. We propose a quantization scheme that allows inference to be carried out using integer-only arithmetic, which can be implemented more efficiently than floating point inference on commonly available integer-only hardware. We also co-design a training procedure to preserve end-to-end model accuracy post quantization. As a result, the proposed quantization scheme improves the tradeoff between accuracy and on-device latency. The improvements are significant even on MobileNets, a model family known for run-time efficiency, and are demonstrated in ImageNet classification and COCO detection on popular CPUs.",
"title": ""
},
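A small numpy sketch of the affine (scale and zero-point) quantization that integer-only inference schemes like the one above rest on: values are stored as uint8, the matrix product is accumulated in int32, and a single rescale recovers real values. The bit widths, rounding and clipping choices are common conventions assumed here, not the paper's exact training-aware procedure.

```python
import numpy as np

def quantize(x, num_bits=8):
    """Affine quantization: real x ~ scale * (q - zero_point), q stored as uint8."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = float(x.min()), float(x.max())
    lo, hi = min(lo, 0.0), max(hi, 0.0)          # keep 0 exactly representable
    scale = (hi - lo) / (qmax - qmin) if hi > lo else 1.0
    zero_point = int(round(qmin - lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.int32) - zero_point)

def int_matmul(qa, sa, za, qb, sb, zb):
    """Integer-only matmul: accumulate in int32, rescale once at the end."""
    acc = (qa.astype(np.int32) - za) @ (qb.astype(np.int32) - zb)
    return sa * sb * acc        # in a real kernel this rescale is a fixed-point multiply

a = np.random.randn(4, 8).astype(np.float32)
b = np.random.randn(8, 3).astype(np.float32)
qa, sa, za = quantize(a)
qb, sb, zb = quantize(b)
print(np.abs(int_matmul(qa, sa, za, qb, sb, zb) - a @ b).max())  # small quantization error
```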
{
"docid": "c10ac9c3117627b2abb87e268f5de6b1",
"text": "Now days, the number of crime over children is increasing day by day. the implementation of School Security System(SSS) via RFID to avoid crime, illegal activates by students and reduce worries among parents. The project is the combination of latest Technology using RFID, GPS/GSM, image processing, WSN and web based development using Php,VB.net language apache web server and SQL. By using RFID technology it is easy track the student thus enhances the security and safety in selected zone. The information about student such as in time and out time from Bus and campus will be recorded to web based system and the GPS/GSM system automatically sends information (SMS / Phone Call) toothier parents. That the student arrived to Bus/Campus safely.",
"title": ""
},
{
"docid": "0a80057b2c43648e668809e185a68fe6",
"text": "A seminar that surveys state-of-the-art microprocessors offers an excellent forum for students to see how computer architecture techniques are employed in practice and for them to gain a detailed knowledge of the state of the art in microprocessor design. Princeton and the University of Virginia have developed such a seminar, organized around student presentations and a substantial research project. The course can accommodate a range of students, from advanced undergraduates to senior graduate students. The course can also be easily adapted to a survey of embedded processors. This paper describes the version taught at the University of Virginia and lessons learned from the experience.",
"title": ""
},
{
"docid": "5c7a66c440b73b9ff66cd73c8efb3718",
"text": "Image captioning is a crucial task in the interaction of computer vision and natural language processing. It is an important way that help human understand the world better. There are many studies on image English captioning, but little work on image Chinese captioning because of the lack of the corresponding datasets. This paper focuses on image Chinese captioning by using abundant English datasets for the issue. In this paper, a method of adding English information to image Chinese captioning is proposed. We validate the use of English information with state-of-the art performance on the datasets: Flickr8K-CN.",
"title": ""
},
{
"docid": "780e49047bdacda9862c51338aa1397f",
"text": "We consider stochastic volatility models under parameter uncertainty and investigate how model derived prices of European options are affected. We let the pricing parameters evolve dynamically in time within a specified region, and formalise the problem as a control problem where the control acts on the parameters to maximise/minimise the option value. Through a dual representation with backward stochastic differential equations, we obtain explicit equations for Heston’s model and investigate several numerical solutions thereof. In an empirical study, we apply our results to market data from the S&P 500 index where the model is estimated to historical asset prices. We find that the conservative model-prices cover 98% of the considered market-prices for a set of European call options.",
"title": ""
},
{
"docid": "cdd43b3baa9849441817b5f31d7cb0e0",
"text": "Traffic light control systems are widely used to monitor and control the flow of automobiles through the junction of many roads. They aim to realize smooth motion of cars in the transportation routes. However, the synchronization of multiple traffic light systems at adjacent intersections is a complicated problem given the various parameters involved. Conventional systems do not handle variable flows approaching the junctions. In addition, the mutual interference between adjacent traffic light systems, the disparity of cars flow with time, the accidents, the passage of emergency vehicles, and the pedestrian crossing are not implemented in the existing traffic system. This leads to traffic jam and congestion. We propose a system based on PIC microcontroller that evaluates the traffic density using IR sensors and accomplishes dynamic timing slots with different levels. Moreover, a portable controller device is designed to solve the problem of emergency vehicles stuck in the overcrowded roads.",
"title": ""
},
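A toy sketch of the dynamic-timing idea in the abstract above: the number of IR sensors triggered on an approach is mapped to one of several green-time levels, and the densest approach is served next. The thresholds, durations, and sensor interface are invented for illustration; the PIC firmware, emergency-vehicle handling, and pedestrian logic are not modeled.

```python
# Map measured traffic density (number of IR sensors triggered on an approach)
# to a green-light duration level; thresholds and durations are illustrative only.
GREEN_SECONDS = {0: 10, 1: 20, 2: 35, 3: 50}   # level -> green time in seconds

def density_level(sensors_triggered, thresholds=(1, 3, 5)):
    level = 0
    for t in thresholds:
        if sensors_triggered >= t:
            level += 1
    return level

def next_phase(approaches):
    """approaches: dict of approach name -> sensors triggered; serve the densest next."""
    name = max(approaches, key=approaches.get)
    return name, GREEN_SECONDS[density_level(approaches[name])]

print(next_phase({"north": 4, "south": 1, "east": 6, "west": 0}))  # ('east', 50)
```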
{
"docid": "3886cc26572b2d82c23790ad52342222",
"text": "This paper presents a quantitative human performance model of making single-stroke pen gestures within certain error constraints in terms of production time. Computed from the properties of Curves, Line segments, and Corners (CLC) in a gesture stroke, the model may serve as a foundation for the design and evaluation of existing and future gesture-based user interfaces at the basic motor control efficiency level, similar to the role of previous \"laws of action\" played to pointing, crossing or steering-based user interfaces. We report and discuss our experimental results on establishing and validating the CLC model, together with other basic empirical findings in stroke gesture production.",
"title": ""
},
{
"docid": "6346955de2fa46e5c109ada42b4e9f77",
"text": "Retinopathy of prematurity (ROP) is a disease that can cause blindness in very low birthweight infants. The incidence of ROP is closely correlated with the weight and the gestational age at birth. Despite current therapies, ROP continues to be a highly debilitating disease. Our advancing knowledge of the pathogenesis of ROP has encouraged investigations into new antivasculogenic therapies. The purpose of this article is to review the findings on the pathophysiological mechanisms that contribute to the transition between the first and second phases of ROP and to investigate new potential therapies. Oxygen has been well characterized for the key role that it plays in retinal neoangiogenesis. Low or high levels of pO2 regulate the normal or abnormal production of hypoxia-inducible factor 1 and vascular endothelial growth factors (VEGF), which are the predominant regulators of retinal angiogenesis. Although low oxygen saturation appears to reduce the risk of severe ROP when carefully controlled within the first few weeks of life, the optimal level of saturation still remains uncertain. IGF-1 and Epo are fundamentally required during both phases of ROP, as alterations in their protein levels can modulate disease progression. Therefore, rhIGF-1 and rhEpo were tested for their abilities to prevent the loss of vasculature during the first phase of ROP, whereas anti-VEGF drugs were tested during the second phase. At present, previous hypotheses concerning ROP should be amended with new pathogenetic theories. Studies on the role of genetic components, nitric oxide, adenosine, apelin and β-adrenergic receptor have revealed new possibilities for the treatment of ROP. The genetic hypothesis that single-nucleotide polymorphisms within the β-ARs play an active role in the pathogenesis of ROP suggests the concept of disease prevention using β-blockers. In conclusion, all factors that can mediate the progression from the avascular to the proliferative phase might have significant implications for the further understanding and treatment of ROP.",
"title": ""
},
{
"docid": "e6548454f46962b5ce4c5d4298deb8e7",
"text": "The use of SVM (Support Vector Machines) in detecting e-mail as spam or nonspam by incorporating feature selection using GA (Genetic Algorithm) is investigated. An GA approach is adopted to select features that are most favorable to SVM classifier, which is named as GA-SVM. Scaling factor is exploited to measure the relevant coefficients of feature to the classification task and is estimated by GA. Heavy-bias operator is introduced in GA to promote sparse in the scaling factors of features. So, feature selection is performed by eliminating irrelevant features whose scaling factor is zero. The experiment results on UCI Spam database show that comparing with original SVM classifier, the number of support vector decreases while better classification results are achieved based on GA-SVM.",
"title": ""
},
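A compact sketch of the GA-SVM idea described above: a GA evolves per-feature scaling factors, an SVM cross-validation score is the fitness, and a feature whose factor is driven to zero is effectively removed. The zeroing mutation stands in for the paper's heavy-bias operator, and the scikit-learn usage, population size, and rates are assumptions rather than the original experimental setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=6, random_state=0)

def fitness(scales):
    Xs = X * scales                          # a zero scale removes that feature
    return cross_val_score(SVC(kernel="rbf"), Xs, y, cv=3).mean()

def mutate(scales, p_zero=0.2, p_jitter=0.2):
    s = scales.copy()
    zero = rng.random(s.size) < p_zero       # sparsity-promoting ("heavy-bias") mutation
    s[zero] = 0.0
    jitter = rng.random(s.size) < p_jitter
    s[jitter] = np.clip(s[jitter] + rng.normal(0, 0.3, jitter.sum()), 0, 2)
    return s

pop = [rng.random(X.shape[1]) for _ in range(12)]
for gen in range(10):
    parents = sorted(pop, key=fitness, reverse=True)[:4]
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = rng.choice(len(parents), 2, replace=False)
        cut = rng.integers(1, X.shape[1])    # one-point crossover of scaling factors
        children.append(mutate(np.concatenate([parents[a][:cut], parents[b][cut:]])))
    pop = parents + children

best = max(pop, key=fitness)
print("kept features:", np.flatnonzero(best > 0), "cv accuracy:", round(fitness(best), 3))
```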
{
"docid": "e881c2ab6abc91aa8e7cbe54d861d36d",
"text": "Tracing traffic using commodity hardware in contemporary highspeed access or aggregation networks such as 10-Gigabit Ethernet is an increasingly common yet challenging task. In this paper we investigate if today’s commodity hardware and software is in principle able to capture traffic from a fully loaded Ethernet. We find that this is only possible for data rates up to 1 Gigabit/s without reverting to using special hardware due to, e. g., limitations with the current PC buses. Therefore, we propose a novel way for monitoring higher speed interfaces (e. g., 10-Gigabit) by distributing their traffic across a set of lower speed interfaces (e. g., 1-Gigabit). This opens the next question: which system configuration is capable of monitoring one such 1-Gigabit/s interface? To answer this question we present a methodology for evaluating the performance impact of different system components including different CPU architectures and different operating system. Our results indicate that the combination of AMD Opteron with FreeBSD outperforms all others, independently of running in singleor multi-processor mode. Moreover, the impact of packet filtering, running multiple capturing applications, adding per packet analysis load, saving the captured packets to disk, and using 64-bit OSes is investigated.",
"title": ""
},
{
"docid": "1f278ddc0d643196ff584c7ea82dc89b",
"text": "We consider an approximate version of a fundamental geometric search problem, polytope membership queries. Given a convex polytope P in REd, presented as the intersection of halfspaces, the objective is to preprocess P so that, given a query point q, it is possible to determine efficiently whether q lies inside P subject to an error bound ε. Previous solutions to this problem were based on straightforward applications of classic polytope approximation techniques by Dudley (1974) and Bentley et al. (1982). The former yields minimum storage, and the latter yields constant query time. A space-time tradeoff can be obtained by interpolating between the two. We present the first significant improvements to this tradeoff. For example, using the same storage as Dudley, we reduce the query time from O(1/ε(d-1)/2) to O(1/ε(d-1)/4). Our approach is based on a very simple algorithm. Both lower bounds and upper bounds on the performance of the algorithm are presented.\n To establish the relevance of our results, we introduce a reduction from approximate nearest neighbor searching to approximate polytope membership queries. We show that our tradeoff provides significant improvements to the best known space-time tradeoffs for approximate nearest neighbor searching. Furthermore, this is achieved with constructions that are much simpler than existing methods.",
"title": ""
},
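To make the query semantics of the abstract above explicit, here is a small numpy sketch of exact versus ε-approximate polytope membership for P given as halfspaces: the approximate test may answer either way for points within ε (scaled by P's diameter) of the boundary. This only illustrates the problem being preprocessed for; the space-time tradeoff data structure itself is not implemented, and the unit-norm rows and diameter scaling are conventions assumed here.

```python
import numpy as np

def membership(A, b, q):
    """Exact test: q is in P = {x : A x <= b} iff every halfspace constraint holds."""
    return bool(np.all(A @ q <= b))

def approx_membership(A, b, q, eps, diam):
    """eps-approximate test: free to answer either way within eps*diam of the boundary.

    Reporting 'inside' whenever every constraint is violated by at most eps*diam
    (rows of A assumed unit-norm) is one legal approximate answer.
    """
    return bool(np.all(A @ q <= b + eps * diam))

# Unit square [0, 1]^2 written as four halfspaces with unit-norm rows.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
q = np.array([1.02, 0.5])                     # just outside, but near the boundary
print(membership(A, b, q), approx_membership(A, b, q, eps=0.05, diam=np.sqrt(2)))
```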
{
"docid": "d2a1ecb8ad28ed5ba75460827341f741",
"text": "Most word representation methods assume that each word owns a single semantic vector. This is usually problematic because lexical ambiguity is ubiquitous, which is also the problem to be resolved by word sense disambiguation. In this paper, we present a unified model for joint word sense representation and disambiguation, which will assign distinct representations for each word sense.1 The basic idea is that both word sense representation (WSR) and word sense disambiguation (WSD) will benefit from each other: (1) highquality WSR will capture rich information about words and senses, which should be helpful for WSD, and (2) high-quality WSD will provide reliable disambiguated corpora for learning better sense representations. Experimental results show that, our model improves the performance of contextual word similarity compared to existing WSR methods, outperforms stateof-the-art supervised methods on domainspecific WSD, and achieves competitive performance on coarse-grained all-words WSD.",
"title": ""
},
{
"docid": "1043fd2e3eb677a768e922f5daf5a5d0",
"text": "A transformer magnetizing current offset for a phase-shift full-bridge (PSFB) converter is dealt in this paper. A model of this current offset is derived and it is presented as a first order system having a pole at a low frequency when the effects from the parasitic components and the switching transition are considered. A digital offset compensator eliminating this current offset is proposed and designed considering the interference in an output voltage regulation. The performances of the proposed compensator are verified by experiments with a 1.2kW PSFB converter. The saturation of the transformer is prevented by this compensator.",
"title": ""
}
] |
scidocsrr
|
0398d5cfcd43924eb95e0a856202be73
|
Microscopy cell counting and detection with fully convolutional regression networks
|
[
{
"docid": "2e7d42b44affb9fa1c12833ea8b00a96",
"text": "The objective of this work is human pose estimation in videos, where multiple frames are available. We investigate a ConvNet architecture that is able to benefit from temporal context by combining information across the multiple frames using optical flow. To this end we propose a network architecture with the following novelties: (i) a deeper network than previously investigated for regressing heatmaps, (ii) spatial fusion layers that learn an implicit spatial model, (iii) optical flow is used to align heatmap predictions from neighbouring frames, and (iv) a final parametric pooling layer which learns to combine the aligned heatmaps into a pooled confidence map. We show that this architecture outperforms a number of others, including one that uses optical flow solely at the input layers, one that regresses joint coordinates directly, and one that predicts heatmaps without spatial fusion. The new architecture outperforms the state of the art by a large margin on three video pose estimation datasets, including the very challenging Poses in the Wild dataset, and outperforms other deep methods that don't use a graphical model on the single-image FLIC benchmark (and also [5, 35] in the high precision region).",
"title": ""
}
] |
[
{
"docid": "01d34357d5b8dbf4b89d3f8683f6fc58",
"text": "Reinforcement learning (RL), while often powerful, can suffer from slow learning speeds, particularly in high dimensional spaces. The autonomous decomposition of tasks and use of hierarchical methods hold the potential to significantly speed up learning in such domains. This paper proposes a novel practical method that can autonomously decompose tasks, by leveraging association rule mining, which discovers hidden relationship among entities in data mining. We introduce a novel method called ARM-HSTRL (Association Rule Mining to extract Hierarchical Structure of Tasks in Reinforcement Learning). It extracts temporal and structural relationships of sub-goals in RL, and multi-task RL. In particular,it finds sub-goals and relationship among them. It is shown the significant efficiency and performance of the proposed method in two main topics of RL.",
"title": ""
},
{
"docid": "c65f050e911abb4b58b4e4f9b9aec63b",
"text": "The abundant spatial and contextual information provided by the advanced remote sensing technology has facilitated subsequent automatic interpretation of the optical remote sensing images (RSIs). In this paper, a novel and effective geospatial object detection framework is proposed by combining the weakly supervised learning (WSL) and high-level feature learning. First, deep Boltzmann machine is adopted to infer the spatial and structural information encoded in the low-level and middle-level features to effectively describe objects in optical RSIs. Then, a novel WSL approach is presented to object detection where the training sets require only binary labels indicating whether an image contains the target object or not. Based on the learnt high-level features, it jointly integrates saliency, intraclass compactness, and interclass separability in a Bayesian framework to initialize a set of training examples from weakly labeled images and start iterative learning of the object detector. A novel evaluation criterion is also developed to detect model drift and cease the iterative learning. Comprehensive experiments on three optical RSI data sets have demonstrated the efficacy of the proposed approach in benchmarking with several state-of-the-art supervised-learning-based object detection approaches.",
"title": ""
},
{
"docid": "1e2006e93ad382b3997736e446c2dff2",
"text": "Classical distillation methods transfer representations from a “teacher” neural network to a “student” network by matching their output activations. Recent methods also match the Jacobians, or the gradient of output activations with the input. However, this involves making some ad hoc decisions, in particular, the choice of the loss function. In this paper, we first establish an equivalence between Jacobian matching and distillation with input noise, from which we derive appropriate loss functions for Jacobian matching. We then rely on this analysis to apply Jacobian matching to transfer learning by establishing equivalence of a recent transfer learning procedure to distillation. We then show experimentally on standard image datasets that Jacobian-based penalties improve distillation, robustness to noisy inputs, and transfer learning.",
"title": ""
},
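A brief numpy sketch of the two losses the abstract above relates: an explicit penalty on the squared difference of Jacobians (computed here by finite differences) and the "distillation with input noise" form that matches outputs on randomly perturbed inputs. The toy tanh teacher/student functions, step sizes, and equal loss weighting are assumptions; this is not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
Wt, Ws = rng.standard_normal((3, 5)), rng.standard_normal((3, 5))
teacher = lambda x: np.tanh(Wt @ x)          # stand-in differentiable models
student = lambda x: np.tanh(Ws @ x)

def jacobian_fd(f, x, h=1e-5):
    """Finite-difference Jacobian of f at x (one column per input dimension)."""
    cols = []
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        cols.append((f(x + e) - f(x - e)) / (2 * h))
    return np.stack(cols, axis=1)

def jacobian_matching_loss(x):
    out = np.sum((teacher(x) - student(x)) ** 2)
    jac = np.sum((jacobian_fd(teacher, x) - jacobian_fd(student, x)) ** 2)
    return out + jac

def noise_distillation_loss(x, sigma=1e-2, samples=64):
    # Matching outputs under small input noise implicitly matches Jacobians too.
    noise = sigma * rng.standard_normal((samples, x.size))
    return np.mean([np.sum((teacher(x + n) - student(x + n)) ** 2) for n in noise])

x = rng.standard_normal(5)
print(jacobian_matching_loss(x), noise_distillation_loss(x))
```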
{
"docid": "7d228b0da98868e92ab5ae13abddb29b",
"text": "An important challenge for human-like AI is compositional semantics. Recent research has attempted to address this by using deep neural networks to learn vector space embeddings of sentences, which then serve as input to other tasks. We present a new dataset for one such task, “natural language inference” (NLI), that cannot be solved using only word-level knowledge and requires some compositionality. We find that the performance of state of the art sentence embeddings (InferSent; Conneau et al., 2017) on our new dataset is poor. We analyze the decision rules learned by InferSent and find that they are consistent with simple heuristics that are ecologically valid in its training dataset. Further, we find that augmenting training with our dataset improves test performance on our dataset without loss of performance on the original training dataset. This highlights the importance of structured datasets in better understanding and improving AI systems.",
"title": ""
},
{
"docid": "cde4d7457b949420ab90bdc894f40eb0",
"text": "We study the problem of named entity recognition (NER) from electronic medical records, which is one of the most fundamental and critical problems for medical text mining. Medical records which are written by clinicians from different specialties usually contain quite different terminologies and writing styles. The difference of specialties and the cost of human annotation makes it particularly difficult to train a universal medical NER system. In this paper, we propose a labelaware double transfer learning framework (LaDTL) for cross-specialty NER, so that a medical NER system designed for one specialty could be conveniently applied to another one with minimal annotation efforts. The transferability is guaranteed by two components: (i) we propose label-aware MMD for feature representation transfer, and (ii) we perform parameter transfer with a theoretical upper bound which is also label aware. We conduct extensive experiments on 12 cross-specialty NER tasks. The experimental results demonstrate that La-DTL provides consistent accuracy improvement over strong baselines. Besides, the promising experimental results on non-medical NER scenarios indicate that LaDTL is potential to be seamlessly adapted to a wide range of NER tasks.",
"title": ""
},
{
"docid": "9ae29655fc75ad277fa541d0930d58bc",
"text": "Rapid and ongoing change creates novelty in ecosystems everywhere, both when comparing contemporary systems to their historical baselines, and predicted future systems to the present. However, the level of novelty varies greatly among places. Here we propose a formal and quantifiable definition of abiotic and biotic novelty in ecosystems, map abiotic novelty globally, and discuss the implications of novelty for the science of ecology and for biodiversity conservation. We define novelty as the degree of dissimilarity of a system, measured in one or more dimensions relative to a reference baseline, usually defined as either the present or a time window in the past. In this conceptualization, novelty varies in degree, it is multidimensional, can be measured, and requires a temporal and spatial reference. This definition moves beyond prior categorical definitions of novel ecosystems, and does not include human agency, self-perpetuation, or irreversibility as criteria. Our global assessment of novelty was based on abiotic factors (temperature, precipitation, and nitrogen deposition) plus human population, and shows that there are already large areas with high novelty today relative to the early 20th century, and that there will even be more such areas by 2050. Interestingly, the places that are most novel are often not the places where absolute changes are largest; highlighting that novelty is inherently different from change. For the ecological sciences, highly novel ecosystems present new opportunities to test ecological theories, but also challenge the predictive ability of ecological models and their validation. For biodiversity conservation, increasing novelty presents some opportunities, but largely challenges. Conservation action is necessary along the entire continuum of novelty, by redoubling efforts to protect areas where novelty is low, identifying conservation opportunities where novelty is high, developing flexible yet strong regulations and policies, and establishing long-term experiments to test management approaches. Meeting the challenge of novelty will require advances in the science of ecology, and new and creative. conservation approaches.",
"title": ""
},
{
"docid": "c1a8e30586aad77395e429556545675c",
"text": "We investigate techniques for analysis and retrieval of object trajectories in a two or three dimensional space. Such kind of data usually contain a great amount of noise, that makes all previously used metrics fail. Therefore, here we formalize non-metric similarity functions based on the Longest Common Subsequence (LCSS), which are very robust to noise and furthermore provide an intuitive notion of similarity between trajectories by giving more weight to the similar portions of the sequences. Stretching of sequences in time is allowed, as well as global translating of the sequences in space. Efficient approximate algorithms that compute these similarity measures are also provided. We compare these new methods to the widely used Euclidean and Time Warping distance functions (for real and synthetic data) and show the superiority of our approach, especially under the strong presence of noise. We prove a weaker version of the triangle inequality and employ it in an indexing structure to answer nearest neighbor queries. Finally, we present experimental results that validate the accuracy and efficiency of our approach.",
"title": ""
},
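A direct dynamic-programming sketch of the LCSS similarity used above for 2-D trajectories, with a spatial matching threshold eps and a temporal window delta. The quadratic-time version is shown rather than the paper's faster approximate algorithms or indexing structure, and normalizing by the shorter trajectory length is a common convention assumed here.

```python
def lcss(traj_a, traj_b, eps, delta):
    """LCSS length for trajectories (lists of (x, y)); points match when both
    coordinates differ by less than eps and their indices differ by at most delta."""
    n, m = len(traj_a), len(traj_b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            (ax, ay), (bx, by) = traj_a[i - 1], traj_b[j - 1]
            if abs(ax - bx) < eps and abs(ay - by) < eps and abs(i - j) <= delta:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

def lcss_similarity(traj_a, traj_b, eps, delta):
    return lcss(traj_a, traj_b, eps, delta) / min(len(traj_a), len(traj_b))

a = [(t, t) for t in range(10)]
b = [(t, t + 0.2) for t in range(10)]        # noisy copy of a -> similarity near 1
c = [(t, 5.0) for t in range(10)]            # unrelated trajectory -> low similarity
print(lcss_similarity(a, b, eps=0.5, delta=2), lcss_similarity(a, c, eps=0.5, delta=2))
```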
{
"docid": "efe99cd2282373a7da3250af989b86e3",
"text": "In this work, analog application for the Sliding Mode Control (SMC) to piezoelectric actuators (PEA) is presented. DSP application of the algorithm suffers from ADC and DAC conversions and mainly faces limitations in sampling time interval. Moreover piezoelectric actuators are known to have very large bandwidth close to the DSP operation frequency. Therefore, with the direct analog application, improvement of the performance and high frequency operation are expected. Design of an appropriate SMC together with a disturbance observer is suggested to have continuous control output and related experimental results for position tracking are presented with comparison of DSP and analog control application.",
"title": ""
},
{
"docid": "cb413e9b170736fc746031fae567b168",
"text": "3D integration is a fast growing field that encompasses different types of technologies. The paper addresses one of the most promising technology which uses Through Silicon Vias (TSV) for interconnecting stacked devices on wafer level to perform high density interconnects with a good electrical performance at the smallest form factor for 3D architectures. Fraunhofer IZM has developed a post front-end 3D integration process which allows stacking of functional and tested FE-devices e.g. sensors, ASICs on wafer level as well as a technology portfolio for passive silicon interposer with redistribution layers and TSV.",
"title": ""
},
{
"docid": "c183e77e531141ea04b7ea95149be70a",
"text": "Millions of computer end users need to perform tasks over large spreadsheet data, yet lack the programming knowledge to do such tasks automatically. We present a programming by example methodology that allows end users to automate such repetitive tasks. Our methodology involves designing a domain-specific language and developing a synthesis algorithm that can learn programs in that language from user-provided examples. We present instantiations of this methodology for particular domains of tasks: (a) syntactic transformations of strings using restricted forms of regular expressions, conditionals, and loops, (b) semantic transformations of strings involving lookup in relational tables, and (c) layout transformations on spreadsheet tables. We have implemented this technology as an add-in for the Microsoft Excel Spreadsheet system and have evaluated it successfully over several benchmarks picked from various Excel help forums.",
"title": ""
},
{
"docid": "8c067af7b61fae244340e784149a9c9b",
"text": "Based on EuroNCAP regulations the number of autonomous emergency braking systems for pedestrians (AEB-P) will increase over the next years. According to accident research a considerable amount of severe pedestrian accidents happen at artificial lighting, twilight or total darkness conditions. Because radar sensors are very robust in these situations, they will play an important role for future AEB-P systems. To assess and evaluate systems a pedestrian dummy with reflection characteristics as close as possible to real humans is indispensable. As an extension to existing measurements in literature this paper addresses open issues like the influence of different positions of the limbs or different clothing for both relevant automotive frequency bands. Additionally suggestions and requirements for specification of pedestrian dummies based on results of RCS measurements of humans and first experimental developed dummies are given.",
"title": ""
},
{
"docid": "0069b06db18ea5d2c6079fcb9f1bae92",
"text": "State-of-the-art techniques in Generative Adversarial Networks (GANs) such as cycleGAN is able to learn the mapping of one image domain X to another image domain Y using unpaired image data. We extend the cycleGAN to Conditional cycleGAN such that the mapping from X to Y is subjected to attribute condition Z. Using face image generation as an application example, where X is a low resolution face image, Y is a high resolution face image, and Z is a set of attributes related to facial appearance (e.g. gender, hair color, smile), we present our method to incorporate Z into the network, such that the hallucinated high resolution face image Y ′ not only satisfies the low resolution constrain inherent in X , but also the attribute condition prescribed by Z. Using face feature vector extracted from face verification network as Z, we demonstrate the efficacy of our approach on identitypreserving face image super-resolution. Our approach is general and applicable to high-quality face image generation where specific facial attributes can be controlled easily in the automatically generated results.",
"title": ""
},
{
"docid": "5d673d1b6755e3e1d451ca17644cf3ec",
"text": "The Achilles Heel of stochastic optimization algorithms is getting trapped on local optima. Novelty Search mitigates this problem by encouraging exploration in all interesting directions by replacing the performance objective with a reward for novel behaviors. This reward for novel behaviors has traditionally required a human-crafted, behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and heroes instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, wherein novelty means interesting novelty. For example, a DNN-based novelty search in the image space does not explore in the low-level pixel space, but instead creates a pressure to create new types of images (e.g., churches, mosques, obelisks, etc.). Here, we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm’s key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: for example, producing intelligent software, robot controllers, optimized physical components, and art.",
"title": ""
},
{
"docid": "b9221d254083fe875c8e81bc8f442403",
"text": "On multi-core processors, applications are run sharing the cache. This paper presents optimization theory to co-locate applications to minimize cache interference and maximize performance. The theory precisely specifies MRC-based composition, optimization, and correctness conditions. The paper also presents a new technique called footprint symbiosis to obtain the best shared cache performance under fair CPU allocation as well as a new sampling technique which reduces the cost of locality analysis. When sampling and optimization are combined, the paper shows that it takes less than 0.1 second analysis per program to obtain a co-run that is within 1.5 percent of the best possible performance. In an exhaustive evaluation with 12,870 tests, the best prior work improves co-run performance by 56 percent on average. The new optimization improves it by another 29 percent. Without single co-run test, footprint symbiosis is able to choose co-run choices that are just 8 percent slower than the best co-run solutions found with exhaustive testing.",
"title": ""
},
{
"docid": "d9dd14f6c28ad3ae3814cb517e2430d1",
"text": "Volunteer geographical information (VGI), either in the context of citizen science or the mining of social media, has proven to be useful in various domains including natural hazards, health status, disease epidemics, and biological monitoring. Nonetheless, the variable or unknown data quality due to crowdsourcing settings are still an obstacle for fully integrating these data sources in environmental studies and potentially in policy making. The data curation process, in which a quality assurance (QA) is needed, is often driven by the direct usability of the data collected within a data conflation process or data fusion (DCDF), combining the crowdsourced data into one view, using potentially other data sources as well. Looking at current practices in VGI data quality and using two examples, namely land cover validation and inundation extent estimation, this paper discusses the close links between QA and DCDF. It aims to help in deciding whether a disentanglement can be possible, whether beneficial or not, in understanding the data curation process with respect to its methodology for future usage of crowdsourced data. Analysing situations throughout the data curation process where and when entanglement between QA and DCDF occur, the paper explores the various facets of VGI data capture, as well as data quality assessment and purposes. Far from rejecting the usability ISO quality criterion, the paper advocates for a decoupling of the QA process and the DCDF step as much as possible while still integrating them within an approach analogous to a Bayesian paradigm.",
"title": ""
},
{
"docid": "714df72467bc3e919b7ea7424883cf26",
"text": "Although a lot of attention has been paid to software cost estimation since 1960, making accurate effort and schedule estimation is still a challenge. To collect evidence and identify potential areas of improvement in software cost estimation, it is important to investigate the estimation accuracy, the estimation method used, and the factors influencing the adoption of estimation methods in current industry. This paper analyzed 112 projects from the Chinese software project benchmarking dataset and conducted questionnaire survey on 116 organizations to investigate the above information. The paper presents the current situations related to software project estimation in China and provides evidence-based suggestions on how to improve software project estimation. Our survey results suggest, e.g., that large projects were more prone to cost and schedule overruns, that most computing managers and professionals were neither satisfied nor dissatisfied with the project estimation, that very few organizations (15%) used model-based methods, and that the high adoption cost and insignificant benefit after adoption were the main causes for low use of model-based methods.",
"title": ""
},
{
"docid": "27e1d29dc8d252081e80f93186a14660",
"text": "Over the last several years there has been an increasing focus on early detection of Autism Spectrum Disorder (ASD), not only from the scientific field but also from professional associations and public health systems all across Europe. Not surprisingly, in order to offer better services and quality of life for both children with ASD and their families, different screening procedures and tools have been developed for early assessment and intervention. However, current evidence is needed for healthcare providers and policy makers to be able to implement specific measures and increase autism awareness in European communities. The general aim of this review is to address the latest and most relevant issues related to early detection and treatments. The specific objectives are (1) analyse the impact, describing advantages and drawbacks, of screening procedures based on standardized tests, surveillance programmes, or other observational measures; and (2) provide a European framework of early intervention programmes and practices and what has been learnt from implementing them in public or private settings. This analysis is then discussed and best practices are suggested to help professionals, health systems and policy makers to improve their local procedures or to develop new proposals for early detection and intervention programmes.",
"title": ""
},
{
"docid": "93f89a636828df50dfe48ffa3e868ea6",
"text": "The reparameterization trick enables the optimization of large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack continuous reparameterizations due to the discontinuous nature of discrete states. In this work we introduce concrete random variables – continuous relaxations of discrete random variables. The concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-likelihood of latent stochastic nodes) on the corresponding discrete graph. We demonstrate their effectiveness on density estimation and structured prediction tasks using neural networks.",
"title": ""
},
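A short numpy sketch of drawing a Concrete (Gumbel-softmax) sample as described above: perturb the logits with Gumbel noise, divide by a temperature, and apply a softmax; as the temperature shrinks, the sample approaches a one-hot draw. The temperatures and the use of numpy instead of an autodiff framework are choices for illustration; in practice the reparameterized sample feeds a differentiable computation graph.

```python
import numpy as np

rng = np.random.default_rng(0)

def concrete_sample(logits, temperature):
    """Reparameterized sample from a Concrete distribution over len(logits) classes."""
    gumbel = -np.log(-np.log(rng.random(logits.shape)))   # Gumbel(0, 1) noise
    y = (logits + gumbel) / temperature
    y = y - y.max()                                        # numerically stable softmax
    return np.exp(y) / np.exp(y).sum()

logits = np.log(np.array([0.1, 0.2, 0.7]))
for t in (5.0, 1.0, 0.1):
    print(t, np.round(concrete_sample(logits, t), 3))      # lower t -> closer to one-hot
```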
{
"docid": "2cd327bd5a7814776825e090b12664ec",
"text": "is an open access repository that collects the work of Arts et Métiers ParisTech researchers and makes it freely available over the web where possible. This article proposes a method based on wavelet transform and neural networks for relating pupillary behavior to psychological stress. The proposed method was tested by recording pupil diameter and electrodermal activity during a simulated driving task. Self-report measures were also collected. Participants performed a baseline run with the driving task only, followed by three stress runs where they were required to perform the driving task along with sound alerts, the presence of two human evaluators, and both. Self-reports and pupil diameter successfully indexed stress manipulation, and significant correlations were found between these measures. However, electrodermal activity did not vary accordingly. After training, the four-way parallel neu-ral network classifier could guess whether a given unknown pupil diameter signal came from one of the four experimental trials with 79.2% precision. The present study shows that pupil diameter signal has good discriminating power for stress detection. 1. INTRODUCTION Stress detection and measurement are important issues in several human–computer interaction domains such as Affective Computing, Adaptive Automation, and Ambient Intelligence. In general, researchers and system designers seek to estimate the psychological state of operators in order to adapt or redesign the working environment accordingly (Sauter, 1991). The primary goal of such adaptation is to enhance overall system performance, trying to reduce workers' psychophysi-cal detriment (e. One key aspect of stress measurement concerns the recording of physiological parameters, which are known to be modulated by the autonomic nervous system (ANS). However, despite",
"title": ""
},
{
"docid": "0f8bf207201692ad4905e28a2993ef29",
"text": "Bluespec System Verilog is an EDL toolset for ASIC and FPGA design offering significantly higher productivity via a radically different approach to high-level synthesis. Many other attempts at high-level synthesis have tried to move the design language towards a more software-like specification of the behavior of the intended hardware. By means of code samples, demonstrations and measured results, we illustrate how Bluespec System Verilog, in an environment familiar to hardware designers, can significantly improve productivity without compromising generated hardware quality.",
"title": ""
}
] |
scidocsrr
|
235c7f8204b6bcf94d528543fcbb9097
|
Depth Separation for Neural Networks
|
[
{
"docid": "7d33ba30fd30dce2cd4a3f5558a8c0ba",
"text": "It has long been conjectured that hypothesis spaces suitable for data that is compositional in nature, such as text or images, may be more efficiently represented with deep hierarchical architectures than with shallow ones. Despite the vast empirical evidence, formal arguments to date are limited and do not capture the kind of networks used in practice. Using tensor factorization, we derive a universal hypothesis space implemented by an arithmetic circuit over functions applied to local data structures (e.g. image patches). The resulting networks first pass the input through a representation layer, and then proceed with a sequence of layers comprising sum followed by product-pooling, where sum corresponds to the widely used convolution operator. The hierarchical structure of networks is born from factorizations of tensors based on the linear weights of the arithmetic circuits. We show that a shallow network corresponds to a rank-1 decomposition, whereas a deep network corresponds to a Hierarchical Tucker (HT) decomposition. Log-space computation for numerical stability transforms the networks into SimNets.",
"title": ""
},
{
"docid": "40b78c5378159e9cdf38275a773b8109",
"text": "For a common class of artificial neural networks, the mean integrated squared error between the estimated network and a target function f is shown to be bounded by $${\\text{O}}\\left( {\\frac{{C_f^2 }}{n}} \\right) + O(\\frac{{ND}}{N}\\log N)$$ where n is the number of nodes, d is the input dimension of the function, N is the number of training observations, and C f is the first absolute moment of the Fourier magnitude distribution of f. The two contributions to this total risk are the approximation error and the estimation error. Approximation error refers to the distance between the target function and the closest neural network function of a given architecture and estimation error refers to the distance between this ideal network function and an estimated network function. With n ~ C f(N/(dlog N))1/2 nodes, the order of the bound on the mean integrated squared error is optimized to be O(C f((d/N)log N)1/2). The bound demonstrates surprisingly favorable properties of network estimation compared to traditional series and nonparametric curve estimation techniques in the case that d is moderately large. Similar bounds are obtained when the number of nodes n is not preselected as a function of C f (which is generally not known a priori), but rather the number of nodes is optimized from the observed data by the use of a complexity regularization or minimum description length criterion. The analysis involves Fourier techniques for the approximation error, metric entropy considerations for the estimation error, and a calculation of the index of resolvability of minimum complexity estimation of the family of networks.",
"title": ""
},
{
"docid": "6efdf43a454ce7da51927c07f1449695",
"text": "We investigate efficient representations of functions that can be written as outputs of so-called sum-product networks, that alternate layers of product and sum operations (see Fig 1 for a simple sum-product network). We find that there exist families of such functions that can be represented much more efficiently by deep sum-product networks (i.e. allowing multiple hidden layers), compared to shallow sum-product networks (constrained to using a single hidden layer). For instance, there is a family of functions fn where n is the number of input variables, such that fn can be computed with a deep sum-product network of log 2 n layers and n−1 units, while a shallow sum-product network (two layers) requires 2 √ n−1 units. These mathematical results are in the same spirit as those by H̊astad and Goldmann (1991) on the limitations of small depth computational circuits. They motivate using deep networks to be able to model complex functions more efficiently than with shallow networks. Exponential gains in terms of the number of parameters are quite significant in the context of statistical machine learning. Indeed, the number of training samples required to optimize a model’s parameters without suffering from overfitting typically increases with the number of parameters. Deep networks thus offer a promising way to learn complex functions from limited data, even though parameter optimization may still be challenging.",
"title": ""
}
] |
[
{
"docid": "96d123a5c9a01922ebb99623fddd1863",
"text": "Previous studies have shown that Wnt signaling is involved in postnatal mammalian myogenesis; however, the downstream mechanism of Wnt signaling is not fully understood. This study reports that the murine four-and-a-half LIM domain 1 (Fhl1) could be stimulated by β-catenin or LiCl treatment to induce myogenesis. In contrast, knockdown of the Fhl1 gene expression in C2C12 cells led to reduced myotube formation. We also adopted reporter assays to demonstrate that either β-catenin or LiCl significantly activated the Fhl1 promoter, which contains four putative consensus TCF/LEF binding sites. Mutations of two of these sites caused a significant decrease in promoter activity by luciferase reporter assay. Thus, we suggest that Wnt signaling induces muscle cell differentiation, at least partly, through Fhl1 activation.",
"title": ""
},
{
"docid": "05092df698f691d35df8d4bc0008ec8f",
"text": "BACKGROUND\nPurpura fulminans is a rare and extremely severe infection, mostly due to Neisseria meningitidis frequently causing early orthopedic lesions. Few studies have reported on the initial surgical management of acute purpura fulminans. The aim of this study is to look at the predictive factors in orthopedic outcome in light of the initial surgical management in children surviving initial resuscitation.\n\n\nMETHODS\nNineteen patients referred to our institution between 1987 and 2005 were taken care of at the very beginning of the purpura fulminans. All cases were retrospectively reviewed so as to collect information on the total skin necrosis, vascular insufficiency, gangrene, and total duration of vasopressive treatment.\n\n\nRESULTS\nAll patients had multiorgan failure; only one never developed any skin necrosis or ischemia. Eighteen patients lost tissue, leading to 22 skin grafts, including two total skin grafts. There was only one graft failure. Thirteen patients were concerned by an amputation, representing, in total, 54 fingers, 36 toes, two transmetatarsal, and ten transtibial below-knee amputations, with a mean delay of 4 weeks after onset of the disease. Necrosis seems to affect mainly the lower limbs, but there is no predictive factor that impacted on the orthopedic outcome. We did not perform any fasciotomy or compartment pressure measurement to avoid non-perfusion worsening; nonetheless, our outcome in this series is comparable to existing series in the literature. V.A.C.(®) therapy could be promising regarding the management of skin necrosis in this particular context. While suffering from general multiorgan failure, great care should be observed not to miss any additional osseous or articular infection, as some patients also develop local osteitis and osteomyelitis that are often not diagnosed.\n\n\nCONCLUSIONS\nWe do not advocate very early surgery during the acute phase of purpura fulminans, as it does not change the orthopedic outcome in these children. By performing amputations and skin coverage some time after the acute phase, we obtained similar results to those found in the literature.",
"title": ""
},
{
"docid": "b7ee04e61d8666b6d865e69e24f69a6f",
"text": "CONTEXT\nThis article presents the main results from a large-scale analytical systematic review on knowledge exchange interventions at the organizational and policymaking levels. The review integrated two broad traditions, one roughly focused on the use of social science research results and the other focused on policymaking and lobbying processes.\n\n\nMETHODS\nData collection was done using systematic snowball sampling. First, we used prospective snowballing to identify all documents citing any of a set of thirty-three seminal papers. This process identified 4,102 documents, 102 of which were retained for in-depth analysis. The bibliographies of these 102 documents were merged and used to identify retrospectively all articles cited five times or more and all books cited seven times or more. All together, 205 documents were analyzed. To develop an integrated model, the data were synthesized using an analytical approach.\n\n\nFINDINGS\nThis article developed integrated conceptualizations of the forms of collective knowledge exchange systems, the nature of the knowledge exchanged, and the definition of collective-level use. This literature synthesis is organized around three dimensions of context: level of polarization (politics), cost-sharing equilibrium (economics), and institutionalized structures of communication (social structuring).\n\n\nCONCLUSIONS\nThe model developed here suggests that research is unlikely to provide context-independent evidence for the intrinsic efficacy of knowledge exchange strategies. To design a knowledge exchange intervention to maximize knowledge use, a detailed analysis of the context could use the kind of framework developed here.",
"title": ""
},
{
"docid": "b89f999bd27a6cbe1865f8853e384eba",
"text": "A rescue crawler robot with flipper arms has high ability to get over rough terrain, but it is hard to control its flipper arms in remote control. The authors aim at development of a semi-autonomous control system for the solution. In this paper, the authors propose a sensor reflexive method that controls these flippers autonomously for getting over unknown steps. Our proposed method is effective in unknown and changeable environment. The authors applied the proposed method to Aladdin, and examined validity of these control rules in unknown environment.",
"title": ""
},
{
"docid": "e1e836fe6ff690f9c85443d26a1448e3",
"text": "■ We describe an apparatus and methodology to support real-time color imaging for night operations. Registered imagery obtained in the visible through nearinfrared band is combined with thermal infrared imagery by using principles of biological opponent-color vision. Visible imagery is obtained with a Gen III image intensifier tube fiber-optically coupled to a conventional charge-coupled device (CCD), and thermal infrared imagery is obtained by using an uncooled thermal imaging array. The two fields of view are matched and imaged through a dichroic beam splitter to produce realistic color renderings of a variety of night scenes. We also demonstrate grayscale and color fusion of intensified-CCD/FLIR imagery. Progress in the development of a low-light-sensitive visible CCD imager with high resolution and wide intrascene dynamic range, operating at thirty frames per second, is described. Example low-light CCD imagery obtained under controlled illumination conditions, from full moon down to overcast starlight, processed by our adaptive dynamic-range algorithm, is shown. The combination of a low-light visible CCD imager and a thermal infrared microbolometer array in a single dualband imager, with a portable image-processing computer implementing our neuralnet algorithms, and color liquid-crystal display, yields a compact integrated version of our system as a solid-state color night-vision device. The systems described here can be applied to a large variety of military operations and civilian needs.",
"title": ""
},
{
"docid": "3419c35e0dff7b47328943235419a409",
"text": "Several methods of classification of partially edentulous arches have been proposed and are in use. The most familiar classifications are those originally proposed by Kennedy, Cummer, and Bailyn. None of these classification systems include implants, simply because most of them were proposed before implants became widely accepted. At this time, there is no classification system for partially edentulous arches incorporating implants placed or to be placed in the edentulous spaces for a removable partial denture (RPD). This article proposes a simple classification system for partially edentulous arches with implants based on the Kennedy classification system, with modification, to be used for RPDs. It incorporates the number and positions of implants placed or to be placed in the edentulous areas. A different name, Implant-Corrected Kennedy (ICK) Classification System, is given to the new classification system to be differentiated from other partially edentulous arch classification systems.",
"title": ""
},
{
"docid": "f6f984853e9fa9a77e3f2c473a9a05d8",
"text": "Autonomous driving within the pedestrian environment is always challenging, as the perception ability is limited by the crowdedness and the planning process is constrained by the complicated human behaviors. In this paper, we present a vehicle planning system for self-driving with limited perception in the pedestrian environment. Acknowledging the difficulty of obstacle detection and tracking within the crowded pedestrian environment, only the raw LIDAR sensing data is employed for the purpose of traversability analysis and vehicle planning. The designed vehicle planning system has been experimentally validated to be robust and safe within the populated pedestrian environment.",
"title": ""
},
{
"docid": "0e012c89f575d116e94b1f6718c8fe4d",
"text": "Tagging is an increasingly important task in natural language processing domains. As there are many natural language processing tasks which can be improved by applying disambiguation to the text, fast and high quality tagging algorithms are a crucial task in information retrieval and question answering. Tagging aims to assigning to each word of a text its correct tag according to the context in which the word is used. Part Of Speech (POS) tagging is a difficult problem by itself, since many words has a number of possible tags associated to it. In this paper we present a novel algorithm that deals with POS-tagging problem based on Harmony Search (HS) optimization method. This paper analyzes the relative advantages of HS metaheuristic approache to the well-known natural language processing problem of POS-tagging. In the experiments we conducted, we applied the proposed algorithm on linguistic corpora and compared the results obtained against other optimization methods such as genetic and simulated annealing algorithms. Experimental results reveal that the proposed algorithm provides more accurate results compared to the other algorithms.",
"title": ""
},
{
"docid": "0506a7f5dddf874487c90025dff0bc7d",
"text": "This paper presents a low-power decision-feedback equalizer (DFE) receiver front-end and a two-step minimum bit-error-rate (BER) adaptation algorithm. A high energy efficiency of 0.46 mW/Gbps is made possible by the combination of a direct-feedback finite-impulse-response (FIR) DFE, an infinite-impulse-response (IIR) DFE, and a clock-and-data recovery (CDR) circuit with adjustable timing offsets. Based on this architecture, the power-hungry stages used in prior DFE receivers such as the continuous-time linear equalizer (CTLE), the current-mode summing circuit for a multitap DFE, and the fast selection logic for a loop-unrolling DFE can all be removed. A two-step adaptation algorithm that finds the equalizer coefficients minimizing the BER is described. First, an extra data sampler with adjustable voltage and timing offsets measures the single-bit response (SBR) of the channel and coarsely tunes the initial coefficient values in the foreground. Next, the same circuit measures the eye-opening and bit-error rates and fine tunes the coefficients in background using a stochastic hill-climbing algorithm. A prototype DFE receiver fabricated in a 65-nm LP/RF CMOS dissipates 2.3 mW and demonstrates measured eye-opening values of 174 mV pp and 0.66 UIpp while operating at 5 Gb/s with a -15-dB loss channel.",
"title": ""
},
{
"docid": "e9326cb2e3b79a71d9e99105f0259c5a",
"text": "Although drugs are intended to be selective, at least some bind to several physiological targets, explaining side effects and efficacy. Because many drug–target combinations exist, it would be useful to explore possible interactions computationally. Here we compared 3,665 US Food and Drug Administration (FDA)-approved and investigational drugs against hundreds of targets, defining each target by its ligands. Chemical similarities between drugs and ligand sets predicted thousands of unanticipated associations. Thirty were tested experimentally, including the antagonism of the β1 receptor by the transporter inhibitor Prozac, the inhibition of the 5-hydroxytryptamine (5-HT) transporter by the ion channel drug Vadilex, and antagonism of the histamine H4 receptor by the enzyme inhibitor Rescriptor. Overall, 23 new drug–target associations were confirmed, five of which were potent (<100 nM). The physiological relevance of one, the drug N,N-dimethyltryptamine (DMT) on serotonergic receptors, was confirmed in a knockout mouse. The chemical similarity approach is systematic and comprehensive, and may suggest side-effects and new indications for many drugs.",
"title": ""
},
{
"docid": "8f137f55376693eeedb8fc5b1e86518a",
"text": "Previous studies have shown that both αA- and αB-crystallins bind Cu2+, suppress the formation of Cu2+-mediated active oxygen species, and protect ascorbic acid from oxidation by Cu2+. αA- and αB-crystallins are small heat shock proteins with molecular chaperone activity. In this study we show that the mini-αA-crystallin, a peptide consisting of residues 71-88 of αA-crystallin, prevents copper-induced oxidation of ascorbic acid. Evaluation of binding of copper to mini-αA-crystallin showed that each molecule of mini-αA-crystallin binds one copper molecule. Isothermal titration calorimetry and nanospray mass spectrometry revealed dissociation constants of 10.72 and 9.9 μM, respectively. 1,1'-Bis(4-anilino)naphthalene-5,5'-disulfonic acid interaction with mini-αA-crystallin was reduced after binding of Cu2+, suggesting that the same amino acids interact with these two ligands. Circular dichroism spectrometry showed that copper binding to mini-αA-crystallin peptide affects its secondary structure. Substitution of the His residue in mini-αA-crystallin with Ala abolished the redox-suppression activity of the peptide. During the Cu2+-induced ascorbic acid oxidation assay, a deletion mutant, αAΔ70-77, showed about 75% loss of ascorbic acid protection compared to the wild-type αA-crystallin. This difference indicates that the 70-77 region is the primary Cu2+-binding site(s) in human native full-size αA-crystallin. The role of the chaperone site in Cu2+ binding in native αA-crystallin was confirmed by the significant loss of chaperone activity by the peptide after Cu2+ binding.",
"title": ""
},
{
"docid": "565efa7a51438990b3d8da6222dca407",
"text": "The collection of huge amount of tracking data made possible by the widespread use of GPS devices, enabled the analysis of such data for several applications domains, ranging from traffic management to advertisement and social studies. However, the raw positioning data, as it is detected by GPS devices, lacks of semantic information since this data does not natively provide any additional contextual information like the places that people visited or the activities performed. Traditionally, this information is collected by hand filled questionnaire where a limited number of users are asked to annotate their tracks with the activities they have done. With the purpose of getting large amount of semantically rich trajectories, we propose an algorithm for automatically annotating raw trajectories with the activities performed by the users. To do this, we analyse the stops points trying to infer the Point Of Interest (POI) the user has visited. Based on the category of the POI and a probability measure based on the gravity law, we infer the activity performed. We experimented and evaluated the method in a real case study of car trajectories, manually annotated by users with their activities. Experimental results are encouraging and will drive our future works.",
"title": ""
},
{
"docid": "28c3e990b40b62069010e0a7f94adb11",
"text": "Steep sub-threshold transistors are promising candidates to replace the traditional MOSFETs for sub-threshold leakage reduction. In this paper, we explore the use of Inter-Band Tunnel Field Effect Transistors (TFETs) in SRAMs at ultra low supply voltages. The uni-directional current conducting TFETs limit the viability of 6T SRAM cells. To overcome this limitation, 7T SRAM designs were proposed earlier at the cost of extra silicon area. In this paper, we propose a novel 6T SRAM design using Si-TFETs for reliable operation with low leakage at ultra low voltages. We also demonstrate that a functional 6T TFET SRAM design with comparable stability margins and faster performances at low voltages can be realized using proposed design when compared with the 7T TFET SRAM cell. We achieve a leakage reduction improvement of 700X and 1600X over traditional CMOS SRAM designs at VDD of 0.3V and 0.5V respectively which makes it suitable for use at ultra-low power applications.",
"title": ""
},
{
"docid": "ec4b7c50f3277bb107961c9953fe3fc4",
"text": "A blockchain is a linked-list of immutable tamper-proof blocks, which is stored at each participating node. Each block records a set of transactions and the associated metadata. Blockchain transactions act on the identical ledger data stored at each node. Blockchain was first perceived by Satoshi Nakamoto (Satoshi 2008), as a peer-to-peer money exchange system. Nakamoto referred to the transactional tokens exchanged among clients in his system, as Bitcoins. Overview",
"title": ""
},
{
"docid": "55a29653163bdf9599bf595154a99a25",
"text": "The effect of the steel slag aggregate aging on mechanical properties of the high performance concrete is analysed in the paper. The effect of different aging periods of steel slag aggregate on mechanical properties of high performance concrete is studied. It was observed that properties of this concrete are affected by the steel slag aggregate aging process. The compressive strength increases with an increase in the aging period of steel slag aggregate. The flexural strength, Young’s modulus, and impact strength of concrete, increase at the rate similar to that of the compressive strength. The workability and the abrasion loss of concrete decrease with an increase of the steel slag aggregate aging period.",
"title": ""
},
{
"docid": "aff504d1c2149d13718595fd3e745eb0",
"text": "Figure 1 illustrates a typical example of a prediction problem: given some noisy observations of a dependent variable at certain values of the independent variable , what is our best estimate of the dependent variable at a new value, ? If we expect the underlying function to be linear, and can make some assumptions about the input data, we might use a least-squares method to fit a straight line (linear regression). Moreover, if we suspect may also be quadratic, cubic, or even nonpolynomial, we can use the principles of model selection to choose among the various possibilities. Gaussian process regression (GPR) is an even finer approach than this. Rather than claiming relates to some specific models (e.g. ), a Gaussian process can represent obliquely, but rigorously, by letting the data ‘speak’ more clearly for themselves. GPR is still a form of supervised learning, but the training data are harnessed in a subtler way. As such, GPR is a less ‘parametric’ tool. However, it’s not completely free-form, and if we’re unwilling to make even basic assumptions about , then more general techniques should be considered, including those underpinned by the principle of maximum entropy; Chapter 6 of Sivia and Skilling (2006) offers an introduction.",
"title": ""
},
{
"docid": "a1d96f46cd4fa625da9e1bf2f6299c81",
"text": "The availability of increasingly higher power commercial microwave monolithic integrated circuit (MMIC) amplifiers enables the construction of solid state amplifiers achieving output powers and performance previously achievable only from traveling wave tube amplifiers (TWTAs). A high efficiency power amplifier incorporating an antipodal finline antenna array within a coaxial waveguide is investigated at Ka Band. The coaxial waveguide combiner structure is used to demonstrate a 120 Watt power amplifier from 27 to 31GHz by combining quantity (16), 10 Watt GaN MMIC devices; achieving typical PAE of 25% for the overall power amplifier assembly.",
"title": ""
},
{
"docid": "fb58d6fe77092be4bce5dd0926c563de",
"text": "We present the Mind the Gap Model (MGM), an approach for interpretable feature extraction and selection. By placing interpretability criteria directly into the model, we allow for the model to both optimize parameters related to interpretability and to directly report a global set of distinguishable dimensions to assist with further data exploration and hypothesis generation. MGM extracts distinguishing features on real-world datasets of animal features, recipes ingredients, and disease co-occurrence. It also maintains or improves performance when compared to related approaches. We perform a user study with domain experts to show the MGM’s ability to help with dataset exploration.",
"title": ""
},
{
"docid": "d41cd48a377afa6b95598d2df6a27b08",
"text": "Graph-based approaches have been most successful in semisupervised learning. In this paper, we focus on label propagation in graph-based semisupervised learning. One essential point of label propagation is that the performance is heavily affected by incorporating underlying manifold of given data into the input graph. The other more important point is that in many recent real-world applications, the same instances are represented by multiple heterogeneous data sources. A key challenge under this setting is to integrate different data representations automatically to achieve better predictive performance. In this paper, we address the issue of obtaining the optimal linear combination of multiple different graphs under the label propagation setting. For this problem, we propose a new formulation with the sparsity (in coefficients of graph combination) property which cannot be rightly achieved by any other existing methods. This unique feature provides two important advantages: 1) the improvement of prediction performance by eliminating irrelevant or noisy graphs and 2) the interpretability of results, i.e., easily identifying informative graphs on classification. We propose efficient optimization algorithms for the proposed approach, by which clear interpretations of the mechanism for sparsity is provided. Through various synthetic and two real-world data sets, we empirically demonstrate the advantages of our proposed approach not only in prediction performance but also in graph selection ability.",
"title": ""
},
{
"docid": "7bc2bacc409341415c8ac9ca3c617c9b",
"text": "Many tasks in artificial intelligence require the collaboration of multiple agents. We exam deep reinforcement learning for multi-agent domains. Recent research efforts often take the form of two seemingly conflicting perspectives, the decentralized perspective, where each agent is supposed to have its own controller; and the centralized perspective, where one assumes there is a larger model controlling all agents. In this regard, we revisit the idea of the master-slave architecture by incorporating both perspectives within one framework. Such a hierarchical structure naturally leverages advantages from one another. The idea of combining both perspectives is intuitive and can be well motivated from many real world systems, however, out of a variety of possible realizations, we highlights three key ingredients, i.e. composed action representation, learnable communication and independent reasoning. With network designs to facilitate these explicitly, our proposal consistently outperforms latest competing methods both in synthetic experiments and when applied to challenging StarCraft1 micromanagement tasks.",
"title": ""
}
] |
scidocsrr
|
375739927ac2c48bd2575c5fb608bfaf
|
Aligned Cluster Analysis for temporal segmentation of human motion
|
[
{
"docid": "ae58bc6ced30bf2c855473541840ec4d",
"text": "Techniques from the image and signal processing domain can be successfully applied to designing, modifying, and adapting animated motion. For this purpose, we introduce multiresolution motion filtering, multitarget motion interpolation with dynamic timewarping, waveshaping and motion displacement mapping. The techniques are well-suited for reuse and adaptation of existing motion data such as joint angles, joint coordinates or higher level motion parameters of articulated figures with many degrees of freedom. Existing motions can be modified and combined interactively and at a higher level of abstraction than conventional systems support. This general approach is thus complementary to keyframing, motion capture, and procedural animation.",
"title": ""
}
] |
[
{
"docid": "d90add899632bab1c5c2637c7080f717",
"text": "Software Testing plays a important role in Software development because it can minimize the development cost. We Propose a Technique for Test Sequence Generation using UML Model Sequence Diagram.UML models give a lot of information that should not be ignored in testing. In This paper main features extract from Sequence Diagram after that we can write the Java Source code for that Features According to ModelJunit Library. ModelJUnit is a extended library of JUnit Library. By using that Source code we can Generate Test Case Automatic and Test Coverage. This paper describes a systematic Test Case Generation Technique performed on model based testing (MBT) approaches By Using Sequence Diagram.",
"title": ""
},
{
"docid": "3580c05a6564e7e09c6577026da69fe9",
"text": "Inpainting based image compression approaches, especially linear and non-linear diffusion models, are an active research topic for lossy image compression. The major challenge in these compression models is to find a small set of descriptive supporting points, which allow for an accurate reconstruction of the original image. It turns out in practice that this is a challenging problem even for the simplest Laplacian interpolation model. In this paper, we revisit the Laplacian interpolation compression model and introduce two fast algorithms, namely successive preconditioning primal dual algorithm and the recently proposed iPiano algorithm, to solve this problem efficiently. Furthermore, we extend the Laplacian interpolation based compression model to a more general form, which is based on principles from bi-level optimization. We investigate two different variants of the Laplacian model, namely biharmonic interpolation and smoothed Total Variation regularization. Our numerical results show that significant improvements can be obtained from the biharmonic interpolation model, and it can recover an image with very high quality from only 5% pixels.",
"title": ""
},
{
"docid": "a7607444b58f0e86000c7f2d09551fcc",
"text": "Background modeling is a critical component for various vision-based applications. Most traditional methods tend to be inefficient when solving large-scale problems. In this paper, we introduce sparse representation into the task of large-scale stable-background modeling, and reduce the video size by exploring its discriminative frames. A cyclic iteration process is then proposed to extract the background from the discriminative frame set. The two parts combine to form our sparse outlier iterative removal (SOIR) algorithm. The algorithm operates in tensor space to obey the natural data structure of videos. Experimental results show that a few discriminative frames determine the performance of the background extraction. Furthermore, SOIR can achieve high accuracy and high speed simultaneously when dealing with real video sequences. Thus, SOIR has an advantage in solving large-scale tasks.",
"title": ""
},
{
"docid": "c3c3add0c42f3b98962c4682a72b1865",
"text": "This paper compares to investigate output characteristics according to a conventional and novel stator structure of axial flux permanent magnet (AFPM) motor for cooling fan drive system. Segmented core of stator has advantages such as easy winding and fast manufacture speed. However, a unit cost increase due to cutting off tooth tip to constant slot width. To solve the problem, this paper proposes a novel stator structure with three-step segmented core. The characteristics of AFPM were analyzed by time-stepping three dimensional finite element analysis (3D FEA) in two stator models, when stator cores are cutting off tooth tips from rectangular core and three step segmented core. Prototype motors were manufactured based on analysis results, and were tested as a motor.",
"title": ""
},
{
"docid": "b109db8e315d904901021224745c9e26",
"text": "IP lookup and routing table update affect the speed at which a router forwards packets. This study proposes a new data structure for dynamic router tables used in IP lookup and update, called the Multi-inherited Search Tree (MIST). Partitioning each prefix according to an index value and removing the relationships among prefixes enables performing IP lookup operations efficiently. Because a prefix trie is used as a substructure, memory can be consumed and dynamic router-table operations can be performed efficiently. Experiments using real IPv4 routing databases indicated that the MIST uses memory efficiently and performs lookup, insert, and delete operations effectively.",
"title": ""
},
{
"docid": "41aa05455471ecd660599f4ec285ff29",
"text": "The recent progress of human parsing techniques has been largely driven by the availability of rich data resources. In this work, we demonstrate some critical discrepancies between the current benchmark datasets and the real world human parsing scenarios. For instance, all the human parsing datasets only contain one person per image, while usually multiple persons appear simultaneously in a realistic scene. It is more practically demanded to simultaneously parse multiple persons, which presents a greater challenge to modern human parsing methods. Unfortunately, absence of relevant data resources severely impedes the development of multiple-human parsing methods. To facilitate future human parsing research, we introduce the Multiple-Human Parsing (MHP) dataset, which contains multiple persons in a real world scene per single image. The MHP dataset contains various numbers of persons (from 2 to 16) per image with 18 semantic classes for each parsing annotation. Persons appearing in the MHP images present sufficient variations in pose, occlusion and interaction. To tackle the multiple-human parsing problem, we also propose a novel Multiple-Human Parser (MH-Parser), which considers both the global context and local cues for each person in the parsing process. The model is demonstrated to outperform the naive “detect-and-parse” approach by a large margin, which will serve as a solid baseline and help drive the future research in real world human parsing.",
"title": ""
},
{
"docid": "d500b28961f2346f1caac6a11fe9b2bd",
"text": "In the late 19th century, DeWecker initially described the use of optic nerve sheath fenestration (ONSF) in a case of neuroretinitis at a time when little was known about the pathophysiology of optic nerve swelling. The procedure lay relatively dormant until renewed interest arose from studies investigating the axonal basis of papilledema and its resolution with ONSF. This surgery has been utilized in a variety of other optic nerve conditions not related to papilledema, with largely disappointing results, including the Ischemic Optic Neuropathy Decompression Trial. Although prospective clinical trials have not been performed to compare the efficacy of ONSF to other treatment modalities like shunting procedures, many studies have confirmed that ONSF can play a significant role in preventing vision loss in conditions where intracranial pressure (ICP) is elevated, like idiopathic intracranial hypertension (IIH).",
"title": ""
},
{
"docid": "1653caa3ac10c831eddd6dfdbffa4725",
"text": "To control and price negative externalities in passenger road transport, we develop an innovative and integrated computational agent based economics (ACE) model to simulate a market oriented “cap” and trade system. (i) First, there is a computational assessment of a digitized road network model of the real world congestion hot spot to determine the “cap” of the system in terms of vehicle volumes at which traffic efficiency deteriorates and the environmental externalities take off exponentially. (ii) Road users submit bids with the market clearing price at the fixed “cap” supply of travel slots in a given time slice (peak hour) being determined by an electronic sealed bid uniform price Dutch auction. (iii) Cross-sectional demand data on car users who traverse the cordon area is used to model and calibrate the heterogeneous bid submission behaviour in order to construct the inverse demand function and demand elasticities. (iv) The willingness to pay approach with heterogeneous value of time is contrasted with the generalized cost approach to pricing congestion with homogenous value of travel time. JEL Classification: R41, R48, C99, D44, H41",
"title": ""
},
{
"docid": "049c6062613d0829cf39cbfe4aedca7a",
"text": "Deep neural networks (DNN) are widely used in many applications. However, their deployment on edge devices has been difficult because they are resource hungry. Binary neural networks (BNN) help to alleviate the prohibitive resource requirements of DNN, where both activations and weights are limited to 1-bit. We propose an improved binary training method (BNN+), by introducing a regularization function that encourages training weights around binary values. In addition to this, to enhance model performance we add trainable scaling factors to our regularization functions. Furthermore, we use an improved approximation of the derivative of the sign activation function in the backward computation. These additions are based on linear operations that are easily implementable into the binary training framework. We show experimental results on CIFAR-10 obtaining an accuracy of 86.5%, on AlexNet and 91.3% with VGG network. On ImageNet, our method also outperforms the traditional BNN method and XNOR-net, using AlexNet by a margin of 4% and 2% top-1 accuracy respectively.",
"title": ""
},
{
"docid": "659f362b1f30c32cdaca90e3141596fb",
"text": "Purpose – The paper aims to focus on so-called NoSQL databases in the context of cloud computing. Design/methodology/approach – Architectures and basic features of these databases are studied, particularly their horizontal scalability and concurrency model, that is mostly weaker than ACID transactions in relational SQL-like database systems. Findings – Some characteristics like a data model and querying capabilities of NoSQL databases are discussed in more detail. Originality/value – The paper shows vary different data models and query possibilities in a common terminology enabling comparison and categorization of NoSQL databases.",
"title": ""
},
{
"docid": "316f7f744db9f8f66c9f4d5b69e7431d",
"text": "We propose automated sport game models as a novel technical means for the analysis of team sport games. The basic idea is that automated sport game models are based on a conceptualization of key notions in such games and probabilistically derived from a set of previous games. In contrast to existing approaches, automated sport game models provide an analysis that is sensitive to their context and go beyond simple statistical aggregations allowing objective, transparent and meaningful concept definitions. Based on automatically gathered spatio-temporal data by a computer vision system, a model hierarchy is built bottom up, where context-sensitive concepts are instantiated by the application of machine learning techniques. We describe the current state of implementation of the ASPOGAMO system including its computer vision subsystem that realizes the idea of automated sport game models. Their usage is exemplified with an analysis of the final of the soccer World Cup 2006.",
"title": ""
},
{
"docid": "aee91ee5d4cbf51d9ce1344be4e5448c",
"text": "Deep generative models have achieved impressive success in recent years. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), as powerful frameworks for deep generative model learning, have largely been considered as two distinct paradigms and received extensive independent studies respectively. This paper aims to establish formal connections between GANs and VAEs through a new formulation of them. We interpret sample generation in GANs as performing posterior inference, and show that GANs and VAEs involve minimizing KL divergences of respective posterior and inference distributions with opposite directions, extending the two learning phases of classic wake-sleep algorithm, respectively. The unified view provides a powerful tool to analyze a diverse set of existing model variants, and enables to transfer techniques across research lines in a principled way. For example, we apply the importance weighting method in VAE literatures for improved GAN learning, and enhance VAEs with an adversarial mechanism that leverages generated samples. Experiments show generality and effectiveness of the transfered techniques.",
"title": ""
},
{
"docid": "8c07982729ca439c8e346cbe018a7198",
"text": "The need for diversification manifests in various recommendation use cases. In this work, we propose a novel approach to diversifying a list of recommended items, which maximizes the utility of the items subject to the increase in their diversity. From a technical perspective, the problem can be viewed as maximization of a modular function on the polytope of a submodular function, which can be solved optimally by a greedy method. We evaluate our approach in an offline analysis, which incorporates a number of baselines and metrics, and in two online user studies. In all the experiments, our method outperforms the baseline methods.",
"title": ""
},
{
"docid": "34fa7e6d5d4f1ab124e3f12462e92805",
"text": "Natural image modeling plays a key role in many vision problems such as image denoising. Image priors are widely used to regularize the denoising process, which is an ill-posed inverse problem. One category of denoising methods exploit the priors (e.g., TV, sparsity) learned from external clean images to reconstruct the given noisy image, while another category of methods exploit the internal prior (e.g., self-similarity) to reconstruct the latent image. Though the internal prior based methods have achieved impressive denoising results, the improvement of visual quality will become very difficult with the increase of noise level. In this paper, we propose to exploit image external patch prior and internal self-similarity prior jointly, and develop an external patch prior guided internal clustering algorithm for image denoising. It is known that natural image patches form multiple subspaces. By utilizing Gaussian mixture models (GMMs) learning, image similar patches can be clustered and the subspaces can be learned. The learned GMMs from clean images are then used to guide the clustering of noisy-patches of the input noisy images, followed by a low-rank approximation process to estimate the latent subspace for image recovery. Numerical experiments show that the proposed method outperforms many state-of-the-art denoising algorithms such as BM3D and WNNM.",
"title": ""
},
{
"docid": "affe52d4bb21526596ba5c131fb871c8",
"text": "Developing large scale software projects involves huge efforts at every stage of the software development life cycle (SDLC). This led researchers and practitioners to develop software processes and methodologies that will assist software developers and improve their operations. Software processes evolved and took multiple approaches to address the different issues of the SDLC. Recently big data analytics applications (BDAA) are in demand as more and more data is collected and stakeholders need effective and efficient software to process them. The goal is not just to be able to process big data, but also arrive at useful conclusions that are accurate and timely. Considering the distinctive characteristics of big data and the available infrastructures, tools and development models, we need to create a systematic approach to the SDLC activities for BDAA development. In this paper, we rely on our earlier work identifying the characteristic and requirements of BDAA and use that to propose appropriate models for their development process. It is necessary to carefully examine this domain and adopt the software processes that best serve the developers and is flexible enough to address the different characteristics of such applications.",
"title": ""
},
{
"docid": "eae9f650b00ecc92377b787c1e0da140",
"text": "Highly reliable data from a sample of 888 white US children, measured serially in a single study, have been used to provide reference data for head circumference from birth to 18 years of age. The present data differ little from those already available for the age range from birth to 36 months of age, but they are considerably higher (about 0.5 cm) at older ages for boys and tend to be slightly higher for girls. These new reference data are smoother across age than those used currently for screening and evaluation. Percentiles for 6-month increments from birth to 6 years have been provided.",
"title": ""
},
{
"docid": "c9ad1daa4ee0d900c1a2aa9838eb9918",
"text": "A central question in human development is how young children gain knowledge so fast. We propose that analogical generalization drives much of this early learning and allows children to generate new abstractions from experience. In this paper, we review evidence for analogical generalization in both children and adults. We discuss how analogical processes interact with the child's changing knowledge base to predict the course of learning, from conservative to domain-general understanding. This line of research leads to challenges to existing assumptions about learning. It shows that (a) it is not enough to consider the distribution of examples given to learners; one must consider the processes learners are applying; (b) contrary to the general assumption, maximizing variability is not always the best route for maximizing generalization and transfer.",
"title": ""
},
{
"docid": "86c3aefe7ab3fa2178da219f57bedf81",
"text": "We present a model constructed for a large consumer products company to assess their vulnerability to disruption risk and quantify its impact on customer service. Risk profiles for the locations and connections in the supply chain are developed using Monte Carlo simulation, and the flow of material and network interactions are modeled using discrete-event simulation. Capturing both the risk profiles and material flow with simulation allows for a clear view of the impact of disruptions on the system. We also model various strategies for coping with the risk in the system in order to maintain product availability to the customer. We discuss the dynamic nature of risk in the network and the importance of proactive planning to mitigate and recover from disruptions.",
"title": ""
},
{
"docid": "2185097978553d5030252ffa9240fb3c",
"text": "The concept of celebrity culture remains remarkably undertheorized in the literature, and it is precisely this gap that this article aims to begin filling in. Starting with media culture definitions, celebrity culture is conceptualized as collections of sense-making practices whose main resources of meaning are celebrity. Consequently, celebrity cultures are necessarily plural. This approach enables us to focus on the spatial differentiation between (sub)national celebrity cultures, for which the Flemish case is taken as a central example. We gain a better understanding of this differentiation by adopting a translocal frame on culture and by focusing on the construction of celebrity cultures through the ‘us and them’ binary and communities. Finally, it is also suggested that what is termed cultural working memory improves our understanding of the remembering and forgetting of actual celebrities, as opposed to more historical figures captured by concepts such as cultural memory.",
"title": ""
},
{
"docid": "ab98f6dc31d080abdb06bb9b4dba798e",
"text": "In TEFL, it is often stated that communication presupposes comprehension. The main purpose of readability studies is thus to measure the comprehensibility of a piece of writing. In this regard, different readability measures were initially devised to help educators select passages suitable for both children and adults. However, readability formulas can certainly be extremely helpful in the realm of EFL reading. They were originally designed to assess the suitability of books for students at particular grade levels or ages. Nevertheless, they can be used as basic tools in determining certain crucial EFL text-characteristics instrumental in the skill of reading and its related issues. The aim of the present paper is to familiarize the readers with the most frequently used readability formulas as well as the pros and cons views toward the use of such formulas. Of course, this part mostly illustrates studies done on readability formulas with the results obtained. The main objective of this part is to help readers to become familiar with the background of the formulas, the theory on which they stand, what they are good for and what they are not with regard to a number of studies cited in this section.",
"title": ""
}
] |
scidocsrr
|
4bdd8803192ea4cb8b47adefd6e45054
|
On-Line Mobile Robot Model Identification Using Integrated Perturbative Dynamics
|
[
{
"docid": "14827ea435d82e4bfe481713af45afed",
"text": "This paper introduces a model-based approach to estimating longitudinal wheel slip and detecting immobilized conditions of autonomous mobile robots operating on outdoor terrain. A novel tire traction/braking model is presented and used to calculate vehicle dynamic forces in an extended Kalman filter framework. Estimates of external forces and robot velocity are derived using measurements from wheel encoders, inertial measurement unit, and GPS. Weak constraints are used to constrain the evolution of the resistive force estimate based upon physical reasoning. Experimental results show the technique accurately and rapidly detects robot immobilization conditions while providing estimates of the robot's velocity during normal driving. Immobilization detection is shown to be robust to uncertainty in tire model parameters. Accurate immobilization detection is demonstrated in the absence of GPS, indicating the algorithm is applicable for both terrestrial applications and space robotics.",
"title": ""
}
] |
[
{
"docid": "c5b8c7fa8518595196aa48740578cb05",
"text": "Control of complex systems involves both system identification and controller design. Deep neural networks have proven to be successful in many identification tasks, however, from model-based control perspective, these networks are difficult to work with because they are typically nonlinear and nonconvex. Therefore many systems are still identified and controlled based on simple linear models despite their poor representation capability. In this paper we bridge the gap between model accuracy and control tractability faced by neural networks, by explicitly constructing networks that are convex with respect to their inputs. We show that these input convex networks can be trained to obtain accurate models of complex physical systems. In particular, we design input convex recurrent neural networks to capture temporal behavior of dynamical systems. Then optimal controllers can be achieved via solving a convex model predictive control problem. Experiment results demonstrate the good potential of the proposed input convex neural network based approach in a variety of control applications. In particular we show that in the MuJoCo locomotion tasks, we could achieve over 10% higher performance using 5× less time compared with state-of-the-art model-based reinforcement learning method; and in the building HVAC control example, our method achieved up to 20% energy reduction compared with classic linear models.",
"title": ""
},
{
"docid": "633cce3860a44e5931d93dc3e83f14f4",
"text": "The main theme of this paper is to present a new digital-controlled technique for battery charger to achieve constant current and voltage control while not requiring current feedback. The basic idea is to achieve constant current charging control by limiting the duty cycle of charger. Therefore, the current feedback signal is not required and thereby reducing the cost of A/D converter, current sensor, and computation complexity required for current control. Moreover, when the battery voltage is increased to the preset voltage level using constant current charge, the charger changes the control mode to constant voltage charge. A digital-controlled charger is designed and implemented for uninterrupted power supply (UPS) applications. The charger control is based upon the proposed control method in software. As a result, the UPS control, including boost converter, charger, and inverter control can be realized using only one low cost MCU. Experimental results demonstrate that the effectiveness of the design and implementation.",
"title": ""
},
{
"docid": "95f9547a510ca82b283c59560b5a93c6",
"text": "Human action recognition in videos is one of the most challenging tasks in computer vision. One important issue is how to design discriminative features for representing spatial context and temporal dynamics. Here, we introduce a path signature feature to encode information from intra-frame and inter-frame contexts. A key step towards leveraging this feature is to construct the proper trajectories (paths) for the data steam. In each frame, the correlated constraints of human joints are treated as small paths, then the spatial path signature features are extracted from them. In video data, the evolution of these spatial features over time can also be regarded as paths from which the temporal path signature features are extracted. Eventually, all these features are concatenated to constitute the input vector of a fully connected neural network for action classification. Experimental results on four standard benchmark action datasets, J-HMDB, SBU Dataset, Berkeley MHAD, and NTURGB+D demonstrate that the proposed approach achieves state-of-the-art accuracy even in comparison with recent deep learning based models.",
"title": ""
},
{
"docid": "a212a2969c0c72894dcde880bbf29fa7",
"text": "Machine learning is useful for building robust learning models, and it is based on a set of features that identify a state of an object. Unfortunately, some data sets may contain a large number of features making, in some cases, the learning process time consuming and the generalization capability of machine learning poor. To make a data set easy to learn and understand, it is typically recommended to remove the most irrelevant features from the set. However, choosing what data should be kept or eliminated may be performed by complex selection algorithms, and optimal feature selection may require an exhaustive search of all possible subsets of features which is computationally expensive. This paper proposes a simple method to perform feature selection using artificial neural networks. It is shown experimentally that genetic algorithms in combination with artificial neural networks can easily be used to extract those features that are required to produce a desired result. Experimental results show that very few hidden neurons are required for feature selection as artificial neural networks are only used to assess the quality of an individual, which is a chosen subset of features.",
"title": ""
},
{
"docid": "1f2832276b346316b15fe05d8593217c",
"text": "This paper presents a new method for generating inductive loop invariants that are expressible as boolean combinations of linear integer constraints. The key idea underlying our technique is to perform a backtracking search that combines Hoare-style verification condition generation with a logical abduction procedure based on quantifier elimination to speculate candidate invariants. Starting with true, our method iteratively strengthens loop invariants until they are inductive and strong enough to verify the program. A key feature of our technique is that it is lazy: It only infers those invariants that are necessary for verifying program correctness. Furthermore, our technique can infer arbitrary boolean combinations (including disjunctions) of linear invariants. We have implemented the proposed approach in a tool called HOLA. Our experiments demonstrate that HOLA can infer interesting invariants that are beyond the reach of existing state-of-the-art invariant generation tools.",
"title": ""
},
{
"docid": "a411780d406e8b720303d18cd6c9df68",
"text": "Functional organization of the lateral temporal cortex in humans is not well understood. We recorded blood oxygenation signals from the temporal lobes of normal volunteers using functional magnetic resonance imaging during stimulation with unstructured noise, frequency-modulated (FM) tones, reversed speech, pseudowords and words. For all conditions, subjects performed a material-nonspecific detection response when a train of stimuli began or ceased. Dorsal areas surrounding Heschl's gyrus bilaterally, particularly the planum temporale and dorsolateral superior temporal gyrus, were more strongly activated by FM tones than by noise, suggesting a role in processing simple temporally encoded auditory information. Distinct from these dorsolateral areas, regions centered in the superior temporal sulcus bilaterally were more activated by speech stimuli than by FM tones. Identical results were obtained in this region using words, pseudowords and reversed speech, suggesting that the speech-tones activation difference is due to acoustic rather than linguistic factors. In contrast, previous comparisons between word and nonword speech sounds showed left-lateralized activation differences in more ventral temporal and temporoparietal regions that are likely involved in processing lexical-semantic or syntactic information associated with words. The results indicate functional subdivision of the human lateral temporal cortex and provide a preliminary framework for understanding the cortical processing of speech sounds.",
"title": ""
},
{
"docid": "d462883de69e86cec8631d195a8a064d",
"text": "Micro Unmanned Aerial Vehicles (UAVs) such as quadrocopters have gained great popularity over the last years, both as a research platform and in various application fields. However, some complex application scenarios call for the formation of swarms consisting of multiple drones. In this paper a platform for the creation of such swarms is presented. It is based on commercially available quadrocopters enhanced with on-board processing and communication units enabling full autonomy of individual drones. Furthermore, a generic ground control station is presented that serves as integration platform. It allows the seamless coordination of different kinds of sensor platforms.",
"title": ""
},
{
"docid": "100ab34e96da2b8640bd97467e9c91e1",
"text": "Manual work is taken over the robot technology and many of the related robot appliances are being used extensively also. Here represents the technology that proposed the working of robot for Floor cleaning. This floor cleaner robot can work in any of two modes i.e. “Automatic and Manual”. All hardware and software operations are controlled by AT89S52 microcontroller. This robot can perform sweeping and mopping task. RF modules have been used for wireless communication between remote (manual mode) and robot and having range 50m. This robot is incorporated with IR sensor for obstacle detection and automatic water sprayer pump. Four motors are used, two for cleaning, one for water pump and one for wheels. Dual relay circuit used to drive the motors one for water pump and another for cleaner. In previous work, there was no automatic water sprayer used and works only in automatic mode. In the automatic mode robot control all the operations itself and change the lane in case of hurdle detection and moves back. In the manual mode, the keypad is used to perform the expected task and to operate robot. In manual mode, RF module has been used to transmit and receive the information between remote and robot and display the information related to the hurdle detection on LCD. The whole circuitry is connected with 12V battery.",
"title": ""
},
{
"docid": "9e32991f47d2d480ed35e488b85dfb79",
"text": "Convolutional Neural Networks (CNNs) are powerful models that achieve impressive results for image classification. In addition, pre-trained CNNs are also useful for other computer vision tasks as generic feature extractors [1]. This paper aims to gain insight into the feature aspect of CNN and demonstrate other uses of CNN features. Our results show that CNN feature maps can be used with Random Forests and SVM to yield classification results that outperforms the original CNN. A CNN that is less than optimal (e.g. not fully trained or overfitting) can also extract features for Random Forest/SVM that yield competitive classification accuracy. In contrast to the literature which uses the top-layer activations as feature representation of images for other tasks [1], using lower-layer features can yield better results for classification.",
"title": ""
},
{
"docid": "d752bf764e4518cee561b11146d951c4",
"text": "Speech recognition is an increasingly important input modality, especially for mobile computing. Because errors are unavoidable in real applications, efficient correction methods can greatly enhance the user experience. In this paper we study a reranking and classification strategy for choosing word alternates to display to the user in the framework of a tap-to-correct interface. By employing a logistic regression model to estimate the probability that an alternate will offer a useful correction to the user, we can significantly reduce the average length of the alternates lists generated with no reduction in the number of words they are able to correct.",
"title": ""
},
{
"docid": "edd78912d764ab33e0e1a8124bc7d709",
"text": "Natural language understanding and dialogue policy learning are both essential in conversational systems that predict the next system actions in response to a current user utterance. Conventional approaches aggregate separate models of natural language understanding (NLU) and system action prediction (SAP) as a pipeline that is sensitive to noisy outputs of error-prone NLU. To address the issues, we propose an end-to-end deep recurrent neural network with limited contextual dialogue memory by jointly training NLU and SAP on DSTC4 multi-domain human-human dialogues. Experiments show that our proposed model significantly outperforms the state-of-the-art pipeline models for both NLU and SAP, which indicates that our joint model is capable of mitigating the affects of noisy NLU outputs, and NLU model can be refined by error flows backpropagating from the extra supervised signals of system actions.",
"title": ""
},
{
"docid": "fe194d04f5bb78c5fa40e93fc6046b42",
"text": "Unsupervised neural machine translation (NMT) is a recently proposed approach for machine translation which aims to train the model without using any labeled data. The models proposed for unsupervised NMT often use only one shared encoder to map the pairs of sentences from different languages to a shared-latent space, which is weak in keeping the unique and internal characteristics of each language, such as the style, terminology, and sentence structure. To address this issue, we introduce an extension by utilizing two independent encoders but sharing some partial weights which are responsible for extracting high-level representations of the input sentences. Besides, two different generative adversarial networks (GANs), namely the local GAN and global GAN, are proposed to enhance the cross-language translation. With this new approach, we achieve significant improvements on English-German, EnglishFrench and Chinese-to-English translation tasks.",
"title": ""
},
{
"docid": "75228d9fd5255ecb753ee3b465640d97",
"text": "To pave the way towards disclosing the full potential of 5G networking, emerging Mobile Edge Computing techniques are gaining momentum in both academic and industrial research as a means to enhance infrastructure scalability and reliability by moving control functions close to the edge of the network. After the promising results under achievement within the EU Mobile Cloud Networking project, we claim the suitability of deploying Evolved Packet Core (EPC) support solutions as a Service (EPCaaS) over a uniform edge cloud infrastructure of Edge Nodes, by following the concepts of Network Function Virtualization (NFV). This paper originally focuses on the support needed for efficient elasticity provisioning of EPCaaS stateful components, by proposing novel solutions for effective subscribers' state management in quality-constrained 5G scenarios. In particular, to favor flexibility and high-availability against network function failures, we have developed a state sharing mechanism across different data centers even in presence of firewall/network encapsulation. In addition, our solution can dynamically select which state portions should be shared and to which Edge Nodes. The reported experimental results, measured over the widely recognized Open5GCore testbed, demonstrate the feasibility and effectiveness of the approach, as well as its capability to satisfy \"carrier-grade\" quality requirements while ensuring good elasticity and scalability.",
"title": ""
},
{
"docid": "0c7eff3e7c961defce07b98914431414",
"text": "The navigational system of the mammalian cortex comprises a number of interacting brain regions. Grid cells in the medial entorhinal cortex and place cells in the hippocampus are thought to participate in the formation of a dynamic representation of the animal's current location, and these cells are presumably critical for storing the representation in memory. To traverse the environment, animals must be able to translate coordinate information from spatial maps in the entorhinal cortex and hippocampus into body-centered representations that can be used to direct locomotion. How this is done remains an enigma. We propose that the posterior parietal cortex is critical for this transformation.",
"title": ""
},
{
"docid": "cfa036aa6eb15b3634fae9a2f3f137da",
"text": "We present a high-efficiency transmitter based on asymmetric multilevel outphasing (AMO). AMO transmitters improve their efficiency over LINC (linear amplification using nonlinear components) transmitters by switching the output envelopes of the power amplifiers among a discrete set of levels. This minimizes the occurrence of large outphasing angles, reducing the energy lost in the power combiner. We demonstrate this concept with a 2.5-GHz, 20-dBm peak output power transmitter using 2-level AMO designed in a 65-nm CMOS process. To the authors' knowledge, this IC is the first integrated implementation of the AMO concept. At peak output power, the measured power-added efficiency is 27.8%. For a 16-QAM signal with 6.1dB peak-to-average power ratio, the AMO prototype improves the average efficiency from 4.7% to 10.0% compared to the standard LINC system.",
"title": ""
},
{
"docid": "af486334ab8cae89d9d8c1c17526d478",
"text": "Notifications are a core feature of mobile phones. They inform users about a variety of events. Users may take immediate action or ignore them depending on the importance of a notification as well as their current context. The nature of notifications is manifold, applications use them both sparsely and frequently. In this paper we present the first large-scale analysis of mobile notifications with a focus on users' subjective perceptions. We derive a holistic picture of notifications on mobile phones by collecting close to 200 million notifications from more than 40,000 users. Using a data-driven approach, we break down what users like and dislike about notifications. Our results reveal differences in importance of notifications and how users value notifications from messaging apps as well as notifications that include information about people and events. Based on these results we derive a number of findings about the nature of notifications and guidelines to effectively use them.",
"title": ""
},
{
"docid": "af1f047dca3a4d7cbd75c84e5d8d1552",
"text": "UNLABELLED\nAcupuncture is a therapeutic treatment that is defined as the insertion of needles into the body at specific points (ie, acupoints). Advances in functional neuroimaging have made it possible to study brain responses to acupuncture; however, previous studies have mainly concentrated on acupoint specificity. We wanted to focus on the functional brain responses that occur because of needle insertion into the body. An activation likelihood estimation meta-analysis was carried out to investigate common characteristics of brain responses to acupuncture needle stimulation compared to tactile stimulation. A total of 28 functional magnetic resonance imaging studies, which consisted of 51 acupuncture and 10 tactile stimulation experiments, were selected for the meta-analysis. Following acupuncture needle stimulation, activation in the sensorimotor cortical network, including the insula, thalamus, anterior cingulate cortex, and primary and secondary somatosensory cortices, and deactivation in the limbic-paralimbic neocortical network, including the medial prefrontal cortex, caudate, amygdala, posterior cingulate cortex, and parahippocampus, were detected and assessed. Following control tactile stimulation, weaker patterns of brain responses were detected in areas similar to those stated above. The activation and deactivation patterns following acupuncture stimulation suggest that the hemodynamic responses in the brain simultaneously reflect the sensory, cognitive, and affective dimensions of pain.\n\n\nPERSPECTIVE\nThis article facilitates a better understanding of acupuncture needle stimulation and its effects on specific activity changes in different brain regions as well as its relationship to the multiple dimensions of pain. Future studies can build on this meta-analysis and will help to elucidate the clinically relevant therapeutic effects of acupuncture.",
"title": ""
},
{
"docid": "feeb5741fae619a37f44eae46169e9d1",
"text": "A 24-GHz novel active quasi-circulator is developed in TSMC 0.18-µm CMOS. We proposed a new architecture by using the canceling mechanism to achieve high isolations and reduce the circuit area. The measured insertion losses |S<inf>32</inf>| and |S<inf>21</inf>| are 9 and 8.5 dB, respectively. The isolation |S<inf>31</inf>| is greater than 30 dB. The dc power consumption is only 9.12 mW with a chip size of 0.35 mm<sup>2</sup>.",
"title": ""
},
{
"docid": "bf6d56c2fd716802b8e2d023f86a4225",
"text": "This is the first case report to demonstrate the efficacy of immersive computer-generated virtual reality (VR) and mixed reality (touching real objects which patients also saw in VR) for the treatment of spider phobia. The subject was a 37-yr-old female with severe and incapacitating fear of spiders. Twelve weekly 1-hr sessions were conducted over a 3-month period. Outcome was assessed on measures of anxiety, avoidance, and changes in behavior toward real spiders. VR graded exposure therapy was successful for reducing fear of spiders providing converging evidence for a growing literature showing the effectiveness of VR as a new medium for exposure therapy.",
"title": ""
}
] |
scidocsrr
|
84790d91d8203ad05ae357fd02c89496
|
DETECTING LASER SPOT IN SHOOTING SIMULATOR USING AN EMBEDDED CAMERA
|
[
{
"docid": "16880162165f4c95d6b01dc4cfc40543",
"text": "In this paper we present CMUcam3, a low-cost, open source, em bedded computer vision platform. The CMUcam3 is the third generation o f the CMUcam system and is designed to provide a flexible and easy to use ope n source development environment along with a more powerful hardware platfo rm. The goal of the system is to provide simple vision capabilities to small emb dded systems in the form of an intelligent sensor that is supported by an open sou rce community. The hardware platform consists of a color CMOS camera, a frame bu ff r, a low cost 32bit ARM7TDMI microcontroller, and an MMC memory card slot. T he CMUcam3 also includes 4 servo ports, enabling one to create entire, w orking robots using the CMUcam3 board as the only requisite robot processor. Cus tom C code can be developed using an optimized GNU toolchain and executabl es can be flashed onto the board using a serial port without external download ing hardware. The development platform includes a virtual camera target allowi ng for rapid application development exclusively on a PC. The software environment c omes with numerous open source example applications and libraries includi ng JPEG compression, frame differencing, color tracking, convolutions, histog ramming, edge detection, servo control, connected component analysis, FAT file syste m upport, and a face detector.",
"title": ""
},
{
"docid": "4acbb4e7de6daec331c8ff8672fa7447",
"text": "This paper describes a machine vision system with back lighting illumination and friendly man-machine interface. Subtraction is used to segment target holes quickly and accurately. The oval obtained after tracing boundary is processed by Generalized Hough Transform to acquire the target's center. Marked-hole's area, perimeter and moment invariants are extracted as cluster features. The auto-scoring software, programmed by Visual C++, has successfully solved the recognition of off-target and overlapped holes through alarming surveillance and bullet tacking programs. The experimental results show that, when the target is distorted obviously, the system can recognize the overlapped holes on real time and also clusters random shape holes on the target correctly. The high accuracy, fast computing speed, easy debugging and low cost make the system can be widely used.",
"title": ""
}
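The hole-detection pipeline described in the preceding entry (image subtraction, contour analysis, moment-based centroids, ring scoring) can be sketched roughly as follows. This is a minimal illustration assuming OpenCV 4 and NumPy; the threshold, minimum blob area, ring width and the synthetic test images are made-up placeholders, and the paper's Generalized Hough Transform step for locating the target centre is replaced here by a fixed centre for brevity.

```python
# Sketch of hole detection by image subtraction, assuming OpenCV 4 (cv2) + NumPy.
# The reference/current images, threshold and ring width are illustrative only.
import cv2
import numpy as np

def detect_holes(reference_gray, current_gray, min_area=20, thresh=40):
    """Return (cx, cy, area, perimeter) for each new hole found by subtraction."""
    diff = cv2.absdiff(reference_gray, current_gray)            # pixels that changed
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    holes = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:                                     # ignore noise blobs
            continue
        m = cv2.moments(c)
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]       # centroid from moments
        holes.append((cx, cy, area, cv2.arcLength(c, True)))
    return holes

def score(hole_xy, target_center, ring_width=30.0):
    """Toy ring scoring: closer to the target centre scores higher (max 10)."""
    d = np.hypot(hole_xy[0] - target_center[0], hole_xy[1] - target_center[1])
    return max(0, 10 - int(d // ring_width))

if __name__ == "__main__":
    # Synthetic demo: blank target, then the same target with one dark "hole".
    ref = np.full((400, 400), 255, np.uint8)
    cur = ref.copy()
    cv2.circle(cur, (230, 180), 6, 0, -1)
    for cx, cy, area, per in detect_holes(ref, cur):
        print(f"hole at ({cx:.0f},{cy:.0f}), area={area:.0f}, "
              f"score={score((cx, cy), (200, 200))}")
```

Moment-based centroids keep the per-hole measurements in line with the area, perimeter and moment features the paper reportedly uses for clustering overlapped holes.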
] |
[
{
"docid": "77a7e6233e41ce9fc8d1db2e85ee0563",
"text": "We show how an ensemble of Q-functions can be leveraged for more effective exploration in deep reinforcement learning. We build on well established algorithms from the bandit setting, and adapt them to the Q-learning setting. We propose an exploration strategy based on upper-confidence bounds (UCB). Our experiments show significant gains on the Atari benchmark.",
"title": ""
},
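A rough NumPy sketch of the UCB-style action selection over an ensemble of Q-functions described in the entry above; the ensemble size, number of actions and exploration coefficient are illustrative choices, not the paper's.

```python
# UCB-style action selection from an ensemble of Q-value estimates (NumPy sketch).
# q_ensemble[k, a] is the k-th ensemble member's Q estimate for action a in the
# current state; the values and the exploration coefficient are made up.
import numpy as np

def ucb_action(q_ensemble: np.ndarray, c: float = 1.0) -> int:
    """Pick argmax_a [ mean_k Q_k(s,a) + c * std_k Q_k(s,a) ]."""
    mean = q_ensemble.mean(axis=0)        # empirical mean over the ensemble
    std = q_ensemble.std(axis=0)          # disagreement acts as an exploration bonus
    return int(np.argmax(mean + c * std))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q_ensemble = rng.normal(size=(10, 4))  # 10 heads, 4 actions
    print("chosen action:", ucb_action(q_ensemble, c=1.5))
```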
{
"docid": "c6a519ce49dc7b5776afe8035f79fc73",
"text": "For 100 years, there has been no change in the basic structure of the electrical power grid. Experiences have shown that the hierarchical, centrally controlled grid of the 20th Century is ill-suited to the needs of the 21st Century. To address the challenges of the existing power grid, the new concept of smart grid has emerged. The smart grid can be considered as a modern electric power grid infrastructure for enhanced efficiency and reliability through automated control, high-power converters, modern communications infrastructure, sensing and metering technologies, and modern energy management techniques based on the optimization of demand, energy and network availability, and so on. While current power systems are based on a solid information and communication infrastructure, the new smart grid needs a different and much more complex one, as its dimension is much larger. This paper addresses critical issues on smart grid technologies primarily in terms of information and communication technology (ICT) issues and opportunities. The main objective of this paper is to provide a contemporary look at the current state of the art in smart grid communications as well as to discuss the still-open research issues in this field. It is expected that this paper will provide a better understanding of the technologies, potential advantages and research challenges of the smart grid and provoke interest among the research community to further explore this promising research area.",
"title": ""
},
{
"docid": "2f7990443281ed98189abb65a23b0838",
"text": "In recent years, there has been a tendency to correlate the origin of modern culture and language with that of anatomically modern humans. Here we discuss this correlation in the light of results provided by our first hand analysis of ancient and recently discovered relevant archaeological and paleontological material from Africa and Europe. We focus in particular on the evolutionary significance of lithic and bone technology, the emergence of symbolism, Neandertal behavioral patterns, the identification of early mortuary practices, the anatomical evidence for the acquisition of language, the",
"title": ""
},
{
"docid": "db34e0317dc78ac7cfedb66619f9d300",
"text": "Most research efforts on image classification so far have been focused on medium-scale datasets, which are often defined as datasets that can fit into the memory of a desktop (typically 4G∼48G). There are two main reasons for the limited effort on large-scale image classification. First, until the emergence of ImageNet dataset, there was almost no publicly available large-scale benchmark data for image classification. This is mostly because class labels are expensive to obtain. Second, large-scale classification is hard because it poses more challenges than its medium-scale counterparts. A key challenge is how to achieve efficiency in both feature extraction and classifier training without compromising performance. This paper is to show how we address this challenge using ImageNet dataset as an example. For feature extraction, we develop a Hadoop scheme that performs feature extraction in parallel using hundreds of mappers. This allows us to extract fairly sophisticated features (with dimensions being hundreds of thousands) on 1.2 million images within one day. For SVM training, we develop a parallel averaging stochastic gradient descent (ASGD) algorithm for training one-against-all 1000-class SVM classifiers. The ASGD algorithm is capable of dealing with terabytes of training data and converges very fast–typically 5 epochs are sufficient. As a result, we achieve state-of-the-art performance on the ImageNet 1000-class classification, i.e., 52.9% in classification accuracy and 71.8% in top 5 hit rate.",
"title": ""
},
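The averaged stochastic gradient descent (ASGD) idea for one-against-all linear SVMs mentioned in the entry above can be sketched on a single machine as below; this is not the paper's parallel Hadoop pipeline, and the toy data, learning rate and regularization constant are assumptions made purely for illustration.

```python
# Averaged SGD (ASGD) for a linear one-vs-all SVM on the hinge loss -- a
# single-machine NumPy sketch of the idea only; hyperparameters are toy values.
import numpy as np

def asgd_ova_svm(X, y, n_classes, lam=1e-4, lr=0.1, epochs=5, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = np.zeros((n_classes, d))        # current iterates
    W_avg = np.zeros_like(W)            # running (Polyak) average returned at the end
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            x = X[i]
            for c in range(n_classes):              # one-against-all: +1 vs. -1 labels
                yc = 1.0 if y[i] == c else -1.0
                margin = yc * (W[c] @ x)
                grad = lam * W[c] - (yc * x if margin < 1.0 else 0.0)
                W[c] -= lr * grad
            W_avg += (W - W_avg) / t                # incremental average of iterates
    return W_avg

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 20))
    true_W = rng.normal(size=(3, 20))
    y = (X @ true_W.T).argmax(axis=1)               # synthetic 3-class labels
    W = asgd_ova_svm(X, y, n_classes=3)
    print("train accuracy:", ((X @ W.T).argmax(axis=1) == y).mean())
```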
{
"docid": "e672d12d5e0163fae74639ca0384a131",
"text": "The greater sophistication and complexity of machines increases the necessity to equip them with human friendly interfaces. As we know, voice is the main support for human-human communication, so it is desirable to interact with machines, namely robots, using voice. In this paper we present the recent evolution of the Natural Language Understanding capabilities of Carl, our mobile intelligent robot capable of interacting with humans using spoken natural language. The new design is based on a hybrid approach, combining a robust parser with Memory Based Learning. This hybrid architecture is capable of performing deep analysis if the sentence is (almost) completely accepted by the grammar, and capable of performing a shallow analysis if the sentence has severe errors.",
"title": ""
},
{
"docid": "1acea5d872937a8929a174916f53303d",
"text": "The pattern of muscle glycogen synthesis following glycogen-depleting exercise occurs in two phases. Initially, there is a period of rapid synthesis of muscle glycogen that does not require the presence of insulin and lasts about 30-60 minutes. This rapid phase of muscle glycogen synthesis is characterised by an exercise-induced translocation of glucose transporter carrier protein-4 to the cell surface, leading to an increased permeability of the muscle membrane to glucose. Following this rapid phase of glycogen synthesis, muscle glycogen synthesis occurs at a much slower rate and this phase can last for several hours. Both muscle contraction and insulin have been shown to increase the activity of glycogen synthase, the rate-limiting enzyme in glycogen synthesis. Furthermore, it has been shown that muscle glycogen concentration is a potent regulator of glycogen synthase. Low muscle glycogen concentrations following exercise are associated with an increased rate of glucose transport and an increased capacity to convert glucose into glycogen. The highest muscle glycogen synthesis rates have been reported when large amounts of carbohydrate (1.0-1.85 g/kg/h) are consumed immediately post-exercise and at 15-60 minute intervals thereafter, for up to 5 hours post-exercise. When carbohydrate ingestion is delayed by several hours, this may lead to ~50% lower rates of muscle glycogen synthesis. The addition of certain amino acids and/or proteins to a carbohydrate supplement can increase muscle glycogen synthesis rates, most probably because of an enhanced insulin response. However, when carbohydrate intake is high (> or =1.2 g/kg/h) and provided at regular intervals, a further increase in insulin concentrations by additional supplementation of protein and/or amino acids does not further increase the rate of muscle glycogen synthesis. Thus, when carbohydrate intake is insufficient (<1.2 g/kg/h), the addition of certain amino acids and/or proteins may be beneficial for muscle glycogen synthesis. Furthermore, ingestion of insulinotropic protein and/or amino acid mixtures might stimulate post-exercise net muscle protein anabolism. Suggestions have been made that carbohydrate availability is the main limiting factor for glycogen synthesis. A large part of the ingested glucose that enters the bloodstream appears to be extracted by tissues other than the exercise muscle (i.e. liver, other muscle groups or fat tissue) and may therefore limit the amount of glucose available to maximise muscle glycogen synthesis rates. Furthermore, intestinal glucose absorption may also be a rate-limiting factor for muscle glycogen synthesis when large quantities (>1 g/min) of glucose are ingested following exercise.",
"title": ""
},
{
"docid": "6f70b6071c945ca22edda9e2b8fe22a8",
"text": "BACKGROUND\nHyaluronidase (Hylase Dessau(®)) is a hyaluronic acid-metabolizing enzyme, which has been shown to loosen the extracellular matrix, thereby improving the diffusion of local anesthetics. Lower eyelid edema is a common post-interventional complication of cosmetic procedures performed in the lid region, such as the injection of hyaluronic acid fillers for tear-trough augmentation. The purpose of this study was to validate the efficacy of hyaluronidase in the management of lower eyelid edema.\n\n\nMETHODS\nWe performed a retrospective analysis with 20 patients with lower eyelid edema. Most patients (n = 14) presented with edema following hyaluronic acid injection (tear-trough augmentation), whereas the minority (n = 6) were treated due to idiopathic edema (malar edema or malar mounds). Patients were treated by local infiltration of approximately 0.2 ml to 0.5 ml of hyaluronidase (Hylase Dessau(®) 20 IU to 75 IU) per eyelid. Photographs were taken prior to and seven days after infiltration.\n\n\nRESULTS\nHyaluronidase was found to reduce effectively and rapidly or resolve eyelid edema after a single injection. No relevant adverse effects were observed. However, it must be noted that a hyaluronidase injection may also dissolve injected hyaluronic acid fillers and may therefore negatively affect tear-trough augmentations. While the effects of a treatment for edema due to tear-trough augmentation were permanent, malar edema and malar mounds reoccurred within two to three weeks.\n\n\nCONCLUSION\nThe infiltration of hyaluronidase is rapid, safe and currently the only effective option for the management of eyelid edema. No relevant adverse effects were observed.",
"title": ""
},
{
"docid": "7209596ad58da21211bfe0ceaaccc72b",
"text": "Knowledge tracing (KT)[1] has been used in various forms for adaptive computerized instruction for more than 40 years. However, despite its long history of application, it is difficult to use in domain model search procedures, has not been used to capture learning where multiple skills are needed to perform a single action, and has not been used to compute latencies of actions. On the other hand, existing models used for educational data mining (e.g. Learning Factors Analysis (LFA)[2]) and model search do not tend to allow the creation of a “model overlay” that traces predictions for individual students with individual skills so as to allow the adaptive instruction to automatically remediate performance. Because these limitations make the transition from model search to model application in adaptive instruction more difficult, this paper describes our work to modify an existing data mining model so that it can also be used to select practice adaptively. We compare this new adaptive data mining model (PFA, Performance Factors Analysis) with two versions of LFA and then compare PFA with standard KT.",
"title": ""
},
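A minimal sketch of the PFA-style prediction rule discussed in the entry above, i.e. a logistic model over per-skill success and failure counts; the skill names, coefficients and counts are invented for illustration and are not fitted values from the paper.

```python
# Performance Factors Analysis (PFA) style prediction: a logistic model over the
# learner's prior successes and failures on each skill needed by an item.
# The skills, coefficients and counts below are illustrative, not fitted values.
import math

def pfa_correct_probability(item_skills, successes, failures, beta, gamma, rho):
    """P(correct) = sigmoid( sum_k beta_k + gamma_k * s_k + rho_k * f_k )."""
    logit = sum(beta[k] + gamma[k] * successes.get(k, 0) + rho[k] * failures.get(k, 0)
                for k in item_skills)
    return 1.0 / (1.0 + math.exp(-logit))

if __name__ == "__main__":
    beta  = {"fractions": -0.5, "decimals": -0.2}   # per-skill easiness
    gamma = {"fractions":  0.3, "decimals":  0.2}   # credit for prior successes
    rho   = {"fractions": 0.05, "decimals": 0.1}    # (smaller) credit for failures
    s = {"fractions": 4, "decimals": 1}             # this student's success counts
    f = {"fractions": 2, "decimals": 3}             # ... and failure counts
    p = pfa_correct_probability({"fractions", "decimals"}, s, f, beta, gamma, rho)
    print(f"predicted P(correct) = {p:.2f}")
```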
{
"docid": "acfe7531f67a40e27390575a69dcd165",
"text": "This paper reviews the relationship between attention deficit hyperactivity disorder (ADHD) and academic performance. First, the relationship at different developmental stages is examined, focusing on pre-schoolers, children, adolescents and adults. Second, the review examines the factors underpinning the relationship between ADHD and academic underperformance: the literature suggests that it is the symptoms of ADHD and underlying cognitive deficits not co-morbid conduct problems that are at the root of academic impairment. The review concludes with an overview of the literature examining strategies that are directed towards remediating the academic impairment of individuals with ADHD.",
"title": ""
},
{
"docid": "ebb01a778c668ef7b439875eaa5682ac",
"text": "In this paper, we present a large scale off-line handwritten Chinese character database-HCL2000 which will be made public available for the research community. The database contains 3,755 frequently used simplified Chinesecharacters written by 1,000 different subjects. The writers’ information is incorporated in the database to facilitate testing on grouping writers with different background such as age, occupation, gender, and education etc. We investigate some characteristics of writing styles from different groups of writers. We evaluate HCL2000 database using three different algorithms as a baseline. We decide to publish the database along with this paper and make it free for a research purpose.",
"title": ""
},
{
"docid": "0d7ce42011c48232189c791e71c289f5",
"text": "RECENT WORK in virtue ethics, particularly sustained reflection on specific virtues, makes it possible to argue that the classical list of cardinal virtues (prudence, justice, temperance, and fortitude) is inadequate, and that we need to articulate the cardinal virtues more correctly. With that end in view, the first section of this article describes the challenges of espousing cardinal virtues today, the second considers the inadequacy of the classical listing of cardinal virtues, and the third makes a proposal. Since virtues, no matter how general, should always relate to concrete living, the article is framed by a case.",
"title": ""
},
{
"docid": "c6d25017a6cba404922933672a18d08a",
"text": "The Internet of Things (IoT) makes smart objects the ultimate building blocks in the development of cyber-physical smart pervasive frameworks. The IoT has a variety of application domains, including health care. The IoT revolution is redesigning modern health care with promising technological, economic, and social prospects. This paper surveys advances in IoT-based health care technologies and reviews the state-of-the-art network architectures/platforms, applications, and industrial trends in IoT-based health care solutions. In addition, this paper analyzes distinct IoT security and privacy features, including security requirements, threat models, and attack taxonomies from the health care perspective. Further, this paper proposes an intelligent collaborative security model to minimize security risk; discusses how different innovations such as big data, ambient intelligence, and wearables can be leveraged in a health care context; addresses various IoT and eHealth policies and regulations across the world to determine how they can facilitate economies and societies in terms of sustainable development; and provides some avenues for future research on IoT-based health care based on a set of open issues and challenges.",
"title": ""
},
{
"docid": "e9402a771cc761e7e6484c2be6bc2cce",
"text": "In this work, we present the Text Conditioned Auxiliary Classifier Generative Adversarial Network, (TAC-GAN) a text to image Generative Adversarial Network (GAN) for synthesizing images from their text descriptions. Former approaches have tried to condition the generative process on the textual data; but allying it to the usage of class information, known to diversify the generated samples and improve their structural coherence, has not been explored. We trained the presented TAC-GAN model on the Oxford102 dataset of flowers, and evaluated the discriminability of the generated images with Inception-Score, as well as their diversity using the Multi-Scale Structural Similarity Index (MS-SSIM). Our approach outperforms the stateof-the-art models, i.e., its inception score is 3.45, corresponding to a relative increase of 7.8% compared to the recently introduced StackGan. A comparison of the mean MS-SSIM scores of the training and generated samples per class shows that our approach is able to generate highly diverse images with an average MS-SSIM of 0.14 over all generated classes.",
"title": ""
},
{
"docid": "c8cd0c0ebd38b3e287d6e6eed965db6b",
"text": "Goalball, one of the official Paralympic events, is popular with visually impaired people all over the world. The purpose of goalball is to throw the specialized ball, with bells inside it, to the goal line of the opponents as many times as possible while defenders try to block the thrown ball with their bodies. Since goalball players cannot rely on visual information, they need to grasp the game situation using their auditory sense. However, it is hard, especially for beginners, to perceive the direction and distance of the thrown ball. In addition, they generally tend to be afraid of the approaching ball because, without visual information, they could be hit by a high-speed ball. In this paper, our goal is to develop an application called GoalBaural (Goalball + aural) that enables goalball players to improve the recognizability of the direction and distance of a thrown ball without going onto the court and playing goalball. The evaluation result indicated that our application would be efficient in improving the speed and the accuracy of locating the balls.",
"title": ""
},
{
"docid": "78921cbdbc80f714598d8fb9ae750c7e",
"text": "Duplicates in data management are common and problematic. In this work, we present a translation of Datalog under bag semantics into a well-behaved extension of Datalog, the so-called warded Datalog±, under set semantics. From a theoretical point of view, this allows us to reason on bag semantics by making use of the well-established theoretical foundations of set semantics. From a practical point of view, this allows us to handle the bag semantics of Datalog by powerful, existing query engines for the required extension of Datalog. This use of Datalog± is extended to give a set semantics to duplicates in Datalog± itself. We investigate the properties of the resulting Datalog± programs, the problem of deciding multiplicities, and expressibility of some bag operations. Moreover, the proposed translation has the potential for interesting applications such as to Multiset Relational Algebra and the semantic web query language SPARQL with bag semantics. 2012 ACM Subject Classification Information systems → Query languages; Theory of computation → Logic; Theory of computation → Semantics and reasoning",
"title": ""
},
{
"docid": "d2a04795fa95d2534b000dbf211cd4b9",
"text": "Tracking multiple targets is a challenging problem, especially when the targets are “identical”, in the sense that the same model is used to describe each target. In this case, simply instantiating several independent 1-body trackers is not an adequate solution, because the independent trackers tend to coalesce onto the best-fitting target. This paper presents an observation density for tracking which solves this problem by exhibiting a probabilistic exclusion principle. Exclusion arises naturally from a systematic derivation of the observation density, without relying on heuristics. Another important contribution of the paper is the presentation of partitioned sampling, a new sampling method for multiple object tracking. Partitioned sampling avoids the high computational load associated with fully coupled trackers, while retaining the desirable properties of coupling.",
"title": ""
},
{
"docid": "e1404d2926f51455690883caf01fb2f9",
"text": "The integration of data produced and collected across autonomous, heterogeneous web services is an increasingly important and challenging problem. Due to the lack of global identifiers, the same entity (e.g., a product) might have different textual representations across databases. Textual data is also often noisy because of transcription errors, incomplete information, and lack of standard formats. A fundamental task during data integration is matching of strings that refer to the same entity. In this paper, we adopt the widely used and established cosine similarity metric from the information retrieval field in order to identify potential string matches across web sources. We then use this similarity metric to characterize this key aspect of data integration as a join between relations on textual attributes, where the similarity of matches exceeds a specified threshold. Computing an exact answer to the text join can be expensive. For query processing efficiency, we propose a sampling-based join approximation strategy for execution in a standard, unmodified relational database management system (RDBMS), since more and more web sites are powered by RDBMSs with a web-based front end. We implement the join inside an RDBMS, using SQL queries, for scalability and robustness reasons. Finally, we present a detailed performance evaluation of an implementation of our algorithm within a commercial RDBMS, using real-life data sets. Our experimental results demonstrate the efficiency and accuracy of our techniques.",
"title": ""
},
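The cosine-similarity text join described in the entry above can be illustrated with a small in-memory sketch. The paper's contribution is a sampling-based approximation expressed in SQL inside an unmodified RDBMS, whereas the snippet below only demonstrates the exact join semantics on toy relations with token-frequency weights; the relations and threshold are made up.

```python
# Exact cosine-similarity text join over two small string relations, using
# token-frequency vectors. In-memory Python sketch of the join semantics only.
import math
from collections import Counter

def tokens(s):
    return s.lower().split()

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def text_join(R, S, threshold=0.5):
    """Return all (r, s, sim) pairs whose cosine similarity meets the threshold."""
    R_vec = [(r, Counter(tokens(r))) for r in R]
    S_vec = [(s, Counter(tokens(s))) for s in S]
    return [(r, s, sim)
            for r, rv in R_vec for s, sv in S_vec
            if (sim := cosine(rv, sv)) >= threshold]

if __name__ == "__main__":
    R = ["Sony Vaio laptop 15 inch", "Apple iPhone 12 64GB"]
    S = ["apple iphone 12 black 64gb", "dell inspiron laptop"]
    for r, s, sim in text_join(R, S, threshold=0.4):
        print(f"{sim:.2f}  {r!r}  ~  {s!r}")
```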
{
"docid": "fdefbb2ed3185eadb4657879d9776d34",
"text": "Convenient monitoring of vital signs, particularly blood pressure(BP), is critical to improve the effectiveness of health-care and prevent chronic diseases. This study presents a user-friendly, low-cost, real-time, and non-contact technique for BP measurement based on the detection of photoplethysmography (PPG) using a regular webcam. Leveraging features extracted from photoplethysmograph, an individual's BP can be estimated using a neural network. Experiments were performed on 20 human participants during three different daytime slots given the influence of background illumination. Compared against the systolic blood pressure and diastolic blood pressure readings collected from a commercially available BP monitor, the proposed technique achieves an average error rate of 9.62% (Systolic BP) and 11.63% (Diastolic BP) for the afternoon session, and 8.4% (Systolic BP) and 11.18% (Diastolic BP) for the evening session. The proposed technique can be easily extended to the camera on any mobile device and thus be widely used in a pervasive manner.",
"title": ""
},
{
"docid": "bda980d41e0b64ec7ec41502cada6e7f",
"text": "In this paper, we address semantic parsing in a multilingual context. We train one multilingual model that is capable of parsing natural language sentences from multiple different languages into their corresponding formal semantic representations. We extend an existing sequence-to-tree model to a multi-task learning framework which shares the decoder for generating semantic representations. We report evaluation results on the multilingual GeoQuery corpus and introduce a new multilingual version of the ATIS corpus.",
"title": ""
},
{
"docid": "f38854d7c788815d8bc6d20db284e238",
"text": "This paper presents the development of a Sinhala Speech Recognition System to be deployed in an Interactive Voice Response (IVR) system of a telecommunication service provider. The main objectives are to recognize Sinhala digits and names of Sinhala songs to be set up as ringback tones. Sinhala being a phonetic language, its features are studied to develop a list of 47 phonemes. A continuous speech recognition system is developed based on Hidden Markov Model (HMM). The acoustic model is trained using the voice through mobile phone. The outcome is a speaker independent speech recognition system which is capable of recognizing 10 digits and 50 Sinhala songs. A word error rate (WER) of 11.2% using a speech corpus of 0.862 hours and a sentence error rate (SER) of 5.7% using a speech corpus of 1.388 hours are achieved for digits and songs respectively.",
"title": ""
}
] |
scidocsrr
|
31676b77fc40d569e619caec0dd4fc17
|
A Pan-Cancer Proteogenomic Atlas of PI3K/AKT/mTOR Pathway Alterations.
|
[
{
"docid": "99ff0acb6d1468936ae1620bc26c205f",
"text": "The Cancer Genome Atlas (TCGA) has used the latest sequencing and analysis methods to identify somatic variants across thousands of tumours. Here we present data and analytical results for point mutations and small insertions/deletions from 3,281 tumours across 12 tumour types as part of the TCGA Pan-Cancer effort. We illustrate the distributions of mutation frequencies, types and contexts across tumour types, and establish their links to tissues of origin, environmental/carcinogen influences, and DNA repair defects. Using the integrated data sets, we identified 127 significantly mutated genes from well-known (for example, mitogen-activated protein kinase, phosphatidylinositol-3-OH kinase, Wnt/β-catenin and receptor tyrosine kinase signalling pathways, and cell cycle control) and emerging (for example, histone, histone modification, splicing, metabolism and proteolysis) cellular processes in cancer. The average number of mutations in these significantly mutated genes varies across tumour types; most tumours have two to six, indicating that the number of driver mutations required during oncogenesis is relatively small. Mutations in transcriptional factors/regulators show tissue specificity, whereas histone modifiers are often mutated across several cancer types. Clinical association analysis identifies genes having a significant effect on survival, and investigations of mutations with respect to clonal/subclonal architecture delineate their temporal orders during tumorigenesis. Taken together, these results lay the groundwork for developing new diagnostics and individualizing cancer treatment.",
"title": ""
}
] |
[
{
"docid": "6d00686ad4d2d589a415d810b2fcc876",
"text": "The accuracy of voice activity detection (VAD) is one of the most important factors which influence the capability of the speech recognition system, how to detect the endpoint precisely in noise environment is still a difficult task. In this paper, we proposed a new VAD method based on Mel-frequency cepstral coefficients (MFCC) similarity. We first extracts the MFCC of a voice signal for each frame, followed by calculating the MFCC Euclidean distance and MFCC correlation coefficient of the test frame and the background noise, Finally, give the experimental results. The results show that at low SNR circumstance, MFCC similarity detection method is better than traditional short-term energy method. Compared with Euclidean distance measure method, correlation coefficient is better.",
"title": ""
},
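A rough sketch of the MFCC-similarity idea from the entry above, assuming librosa and NumPy are available. The synthetic signal, the assumption that the first half-second is pure noise, and the decision thresholds are illustrative placeholders rather than the paper's settings.

```python
# MFCC-similarity voice activity detection sketch: compare each frame's MFCC
# vector with an MFCC estimate of the background noise using Euclidean distance
# and correlation. Assumes librosa/NumPy; signal and thresholds are synthetic.
import numpy as np
import librosa

sr = 16000
rng = np.random.default_rng(0)
noise = 0.01 * rng.standard_normal(2 * sr)                      # 2 s of background noise
t = np.arange(sr) / sr
speechy = 0.3 * np.sin(2 * np.pi * 220 * t) * np.hanning(sr)    # 1 s tone burst as a stand-in
signal = np.concatenate([noise[:sr], noise[sr:] + speechy])

mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13, hop_length=512)  # (13, n_frames)
n_noise_frames = sr // 512 // 2                                 # first ~0.5 s assumed noise
noise_profile = mfcc[:, :n_noise_frames].mean(axis=1)

dist = np.linalg.norm(mfcc - noise_profile[:, None], axis=0)    # Euclidean distance per frame
corr = np.array([np.corrcoef(noise_profile, mfcc[:, i])[0, 1]
                 for i in range(mfcc.shape[1])])

dist_thr = dist[:n_noise_frames].mean() + 3 * dist[:n_noise_frames].std()
is_speech = (dist > dist_thr) & (corr < 0.9)                    # far from noise, weakly correlated
print("speech frames:", int(is_speech.sum()), "of", mfcc.shape[1])
```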
{
"docid": "070d23b78d7808a19bde68f0ccdd7587",
"text": "Deep learning is playing a more and more important role in our daily life and scientific research such as autonomous systems, intelligent life and data mining. However, numerous studies have showed that deep learning with superior performance on many tasks may suffer from subtle perturbations constructed by attacker purposely, called adversarial perturbations, which are imperceptible to human observers but completely effect deep neural network models. The emergence of adversarial attacks has led to questions about neural networks. Therefore, machine learning security and privacy are becoming an increasingly active research area. In this paper, we summarize the prevalent methods for the generating adversarial attacks according to three groups. We elaborated on their ideas and principles of generation. We further analyze the common limitations of these methods and implement statistical experiments of the last layer output on CleverHans to reveal that the detection of adversarial samples is not as difficult as it seems and can be achieved in some relatively simple manners.",
"title": ""
},
{
"docid": "e21aed852a892cbede0a31ad84d50a65",
"text": "0377-2217/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.ejor.2010.09.010 ⇑ Corresponding author. Tel.: +1 662 915 5519. E-mail addresses: [email protected] (C. R (D. Gamboa), [email protected] (F. Glover), [email protected] (C. Osterman). Heuristics for the traveling salesman problem (TSP) have made remarkable advances in recent years. We survey the leading methods and the special components responsible for their successful implementations, together with an experimental analysis of computational tests on a challenging and diverse set of symmetric and asymmetric TSP benchmark problems. The foremost algorithms are represented by two families, deriving from the Lin–Kernighan (LK) method and the stem-and-cycle (S&C) method. We show how these families can be conveniently viewed within a common ejection chain framework which sheds light on their similarities and differences, and gives clues about the nature of potential enhancements to today’s best methods that may provide additional gains in solving large and difficult TSPs. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "47de26ecd5f759afa7361c7eff9e9b25",
"text": "At many teaching hospitals, it is common practice for on-call radiology residents to interpret radiology examinations; such reports are later reviewed and revised by an attending physician before being used for any decision making. In case there are substantial problems in the resident’s initial report, the resident is called and the problems are reviewed to prevent similar future reporting errors. However, due to the large volume of reports produced, attending physicians rarely discuss the problems side by side with residents, thus missing an educational opportunity. In this work, we introduce a pipeline to discriminate between reports with significant discrepancies and those with non-significant discrepancies. The former contain severe errors or mis-interpretations, thus representing a great learning opportunity for the resident; the latter presents only minor differences (often stylistic) and have a minor role in the education of a resident. By discriminating between the two, the proposed system could flag those reports that an attending radiology should definitely review with residents under their supervision. We evaluated our approach on 350 manually annotated radiology reports sampled from a collection of tens of thousands. The proposed classifier achieves an Area Under the Curve (AUC) of 0.837, which represent a 14% improvement over the baselines. Furthermore, the classifier reduces the False Negative Rate (FNR) by 52%, a desirable performance metric for any recall-oriented task such as the one studied",
"title": ""
},
{
"docid": "48485e967c5aa345a53b91b47cc0e6d0",
"text": "The buccinator musculomucosal flaps are actually considered the main reconstructive option for small-moderate defects of the oral mucosa. In this paper we present our experience with the posteriorly based buccinator musculomucosal flap. A retrospective review was performed of all patients who had had a Bozola flap reconstruction at the Operative Unit of Maxillo-Facial Surgery of Parma, Italy, between 2003 and 2010. The Bozola flap was used in 19 patients. In most cases they had defects of the palate (n=12). All flaps were harvested successfully and no major complications occurred. Minor complications were observed in two cases. At the end of the follow up all patients returned to a normal diet without alterations of speech and swallowing. We consider the Bozola flap the first choice for the reconstruction of defects involving the palate, the cheek and the postero-lateral tongue and floor of the mouth.",
"title": ""
},
{
"docid": "d7f743ddff9863b046ab91304b37a667",
"text": "In sensor networks, passive localization can be performed by exploiting the received signals of unknown emitters. In this paper, the Time of Arrival (TOA) measurements are investigated. Often, the unknown time of emission is eliminated by calculating the difference between two TOA measurements where Time Difference of Arrival (TDOA) measurements are obtained. In TOA processing, additionally, the unknown time of emission is to be estimated. Therefore, the target state is extended by the unknown time of emission. A comparison is performed investigating the attainable accuracies for localization based on TDOA and TOA measurements given by the Cramér-Rao Lower Bound (CRLB). Using the Maximum Likelihood estimator, some characteristic features of the cost functions are investigated indicating a better performance of the TOA approach. But counterintuitive, Monte Carlo simulations do not support this indication, but show the comparability of TDOA and TOA localization.",
"title": ""
},
{
"docid": "8a37001733b0ee384277526bd864fe04",
"text": "Miscreants use DDoS botnets to attack a victim via a large number of malware-infected hosts, combining the bandwidth of the individual PCs. Such botnets have thus a high potential to render targeted services unavailable. However, the actual impact of attacks by DDoS botnets has never been evaluated. In this paper, we monitor C&C servers of 14 DirtJumper and Yoddos botnets and record the DDoS targets of these networks. We then aim to evaluate the availability of the DDoS victims, using a variety of measurements such as TCP response times and analyzing the HTTP content. We show that more than 65% of the victims are severely affected by the DDoS attacks, while also a few DDoS attacks likely failed.",
"title": ""
},
{
"docid": "7c1c7eb4f011ace0734dd52759ce077f",
"text": "OBJECTIVES\nTo investigate the treatment effects of bilateral robotic priming combined with the task-oriented approach on motor impairment, disability, daily function, and quality of life in patients with subacute stroke.\n\n\nDESIGN\nA randomized controlled trial.\n\n\nSETTING\nOccupational therapy clinics in medical centers.\n\n\nSUBJECTS\nThirty-one subacute stroke patients were recruited.\n\n\nINTERVENTIONS\nParticipants were randomly assigned to receive bilateral priming combined with the task-oriented approach (i.e., primed group) or to the task-oriented approach alone (i.e., unprimed group) for 90 minutes/day, 5 days/week for 4 weeks. The primed group began with the bilateral priming technique by using a bimanual robot-aided device.\n\n\nMAIN MEASURES\nMotor impairments were assessed by the Fugal-Meyer Assessment, grip strength, and the Box and Block Test. Disability and daily function were measured by the modified Rankin Scale, the Functional Independence Measure, and actigraphy. Quality of life was examined by the Stroke Impact Scale.\n\n\nRESULTS\nThe primed and unprimed groups improved significantly on most outcomes over time. The primed group demonstrated significantly better improvement on the Stroke Impact Scale strength subscale ( p = 0.012) and a trend for greater improvement on the modified Rankin Scale ( p = 0.065) than the unprimed group.\n\n\nCONCLUSION\nBilateral priming combined with the task-oriented approach elicited more improvements in self-reported strength and disability degrees than the task-oriented approach by itself. Further large-scale research with at least 31 participants in each intervention group is suggested to confirm the study findings.",
"title": ""
},
{
"docid": "52c0c6d1deacdca44df5000b2b437c78",
"text": "This paper adopts a Bayesian approach to simultaneously learn both an optimal nonlinear classifier and a subset of predictor variables (or features) that are most relevant to the classification task. The approach uses heavy-tailed priors to promote sparsity in the utilization of both basis functions and features; these priors act as regularizers for the likelihood function that rewards good classification on the training data. We derive an expectation- maximization (EM) algorithm to efficiently compute a maximum a posteriori (MAP) point estimate of the various parameters. The algorithm is an extension of recent state-of-the-art sparse Bayesian classifiers, which in turn can be seen as Bayesian counterparts of support vector machines. Experimental comparisons using kernel classifiers demonstrate both parsimonious feature selection and excellent classification accuracy on a range of synthetic and benchmark data sets.",
"title": ""
},
{
"docid": "2e987add43a584bdd0a67800ad28c5f8",
"text": "The bones of elderly people with osteoporosis are susceptible to either traumatic fracture as a result of external impact, such as what happens during a fall, or even spontaneous fracture without trauma as a result of muscle contraction [1, 2]. Understanding the fracture behavior of bone tissue will help researchers find proper treatments to strengthen the bone in order to prevent such fractures, and design better implants to reduce the chance of secondary fracture after receiving the implant.",
"title": ""
},
{
"docid": "863db7439c2117e36cc2a789b557a665",
"text": "A core brain network has been proposed to underlie a number of different processes, including remembering, prospection, navigation, and theory of mind [Buckner, R. L., & Carroll, D. C. Self-projection and the brain. Trends in Cognitive Sciences, 11, 49–57, 2007]. This purported network—medial prefrontal, medial-temporal, and medial and lateral parietal regions—is similar to that observed during default-mode processing and has been argued to represent self-projection [Buckner, R. L., & Carroll, D. C. Self-projection and the brain. Trends in Cognitive Sciences, 11, 49–57, 2007] or scene-construction [Hassabis, D., & Maguire, E. A. Deconstructing episodic memory with construction. Trends in Cognitive Sciences, 11, 299–306, 2007]. To date, no systematic and quantitative demonstration of evidence for this common network has been presented. Using the activation likelihood estimation (ALE) approach, we conducted four separate quantitative meta-analyses of neuroimaging studies on: (a) autobiographical memory, (b) navigation, (c) theory of mind, and (d) default mode. A conjunction analysis between these domains demonstrated a high degree of correspondence. We compared these findings to a separate ALE analysis of prospection studies and found additional correspondence. Across all domains, and consistent with the proposed network, correspondence was found within the medial-temporal lobe, precuneus, posterior cingulate, retrosplenial cortex, and the temporo-parietal junction. Additionally, this study revealed that the core network extends to lateral prefrontal and occipital cortices. Autobiographical memory, prospection, theory of mind, and default mode demonstrated further reliable involvement of the medial prefrontal cortex and lateral temporal cortices. Autobiographical memory and theory of mind, previously studied as distinct, exhibited extensive functional overlap. These findings represent quantitative evidence for a core network underlying a variety of cognitive domains.",
"title": ""
},
{
"docid": "566412870c83e5e44fabc50487b9d994",
"text": "The influence of technology in the field of gambling innovation continues to grow at a rapid pace. After a brief overview of gambling technologies and deregulation issues, this review examines the impact of technology on gambling by highlighting salient factors in the rise of Internet gambling (i.e., accessibility, affordability, anonymity, convenience, escape immersion/dissociation, disinhibition, event frequency, asociability, interactivity, and simulation). The paper also examines other factors in relation to Internet gambling including the relationship between Internet addiction and Internet gambling addiction. The paper ends by overviewing some of the social issues surrounding Internet gambling (i.e., protection of the vulnerable, Internet gambling in the workplace, electronic cash, and unscrupulous operators). Recommendations for Internet gambling operators are also provided.",
"title": ""
},
{
"docid": "28574c82a49b096b11f1b78b5d62e425",
"text": "A major reason for the current reproducibility crisis in the life sciences is the poor implementation of quality control measures and reporting standards. Improvement is needed, especially regarding increasingly complex in vitro methods. Good Cell Culture Practice (GCCP) was an effort from 1996 to 2005 to develop such minimum quality standards also applicable in academia. This paper summarizes recent key developments in in vitro cell culture and addresses the issues resulting for GCCP, e.g. the development of induced pluripotent stem cells (iPSCs) and gene-edited cells. It further deals with human stem-cell-derived models and bioengineering of organo-typic cell cultures, including organoids, organ-on-chip and human-on-chip approaches. Commercial vendors and cell banks have made human primary cells more widely available over the last decade, increasing their use, but also requiring specific guidance as to GCCP. The characterization of cell culture systems including high-content imaging and high-throughput measurement technologies increasingly combined with more complex cell and tissue cultures represent a further challenge for GCCP. The increasing use of gene editing techniques to generate and modify in vitro culture models also requires discussion of its impact on GCCP. International (often varying) legislations and market forces originating from the commercialization of cell and tissue products and technologies are further impacting on the need for the use of GCCP. This report summarizes the recommendations of the second of two workshops, held in Germany in December 2015, aiming map the challenge and organize the process or developing a revised GCCP 2.0.",
"title": ""
},
{
"docid": "59c2e1dcf41843d859287124cc655b05",
"text": "Atherosclerotic cardiovascular disease (ASCVD) is the most common cause of death in most Western countries. Nutrition factors contribute importantly to this high risk for ASCVD. Favourable alterations in diet can reduce six of the nine major risk factors for ASCVD, i.e. high serum LDL-cholesterol levels, high fasting serum triacylglycerol levels, low HDL-cholesterol levels, hypertension, diabetes and obesity. Wholegrain foods may be one the healthiest choices individuals can make to lower the risk for ASCVD. Epidemiological studies indicate that individuals with higher levels (in the highest quintile) of whole-grain intake have a 29 % lower risk for ASCVD than individuals with lower levels (lowest quintile) of whole-grain intake. It is of interest that neither the highest levels of cereal fibre nor the highest levels of refined cereals provide appreciable protection against ASCVD. Generous intake of whole grains also provides protection from development of diabetes and obesity. Diets rich in wholegrain foods tend to decrease serum LDL-cholesterol and triacylglycerol levels as well as blood pressure while increasing serum HDL-cholesterol levels. Whole-grain intake may also favourably alter antioxidant status, serum homocysteine levels, vascular reactivity and the inflammatory state. Whole-grain components that appear to make major contributions to these protective effects are: dietary fibre; vitamins; minerals; antioxidants; phytosterols; other phytochemicals. Three servings of whole grains daily are recommended to provide these health benefits.",
"title": ""
},
{
"docid": "66370e97fba315711708b13e0a1c9600",
"text": "Cloud Computing is the long dreamed vision of computing as a utility, where users can remotely store their data into the cloud so as to enjoy the on-demand high quality applications and services from a shared pool of configurable computing resources. By data outsourcing, users can be relieved from the burden of local data storage and maintenance. However, the fact that users no longer have physical possession of the possibly large size of outsourced data makes the data integrity protection in Cloud Computing a very challenging and potentially formidable task, especially for users with constrained computing resources and capabilities. Thus, enabling public auditability for cloud data storage security is of critical importance so that users can resort to an external audit party to check the integrity of outsourced data when needed. To securely introduce an effective third party auditor (TPA), the following two fundamental requirements have to be met: 1) TPA should be able to efficiently audit the cloud data storage without demanding the local copy of data, and introduce no additional on-line burden to the cloud user; 2) The third party auditing process should bring in no new vulnerabilities towards user data privacy. In this paper, we utilize and uniquely combine the public key based homomorphic authenticator with random masking to achieve the privacy-preserving public cloud data auditing system, which meets all above requirements. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signature to extend our main result into a multi-user setting, where TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis shows the proposed schemes are provably secure and highly efficient.",
"title": ""
},
{
"docid": "a2f65eb4a81bc44ea810d834ab33d891",
"text": "This survey provides the basis for developing research in the area of mobile manipulator performance measurement, an area that has relatively few research articles when compared to other mobile manipulator research areas. The survey provides a literature review of mobile manipulator research with examples of experimental applications. The survey also provides an extensive list of planning and control references as this has been the major research focus for mobile manipulators which factors into performance measurement of the system. The survey then reviews performance metrics considered for mobile robots, robot arms, and mobile manipulators and the systems that measure their performance, including machine tool measurement systems through dynamic motion tracking systems. Lastly, the survey includes a section on research that has occurred for performance measurement of robots, mobile robots, and mobile manipulators beginning with calibration, standards, and mobile manipulator artifacts that are being considered for evaluation of mobile manipulator performance.",
"title": ""
},
{
"docid": "d56807574d6185c6e3cd9a8e277f8006",
"text": "There is a substantial literature on e-government that discusses information and communication technology (ICT) as an instrument for reducing the role of bureaucracy in government organizations. The purpose of this paper is to offer a critical discussion of this literature and to provide a complementary argument, which favors the use of ICT in the public sector to support the operations of bureaucratic organizations. Based on the findings of a case study – of the Venice municipality in Italy – the paper discusses how ICT can be used to support rather than eliminate bureaucracy. Using the concepts of e-bureaucracy and functional simplification and closure, the paper proposes evidence and support for the argument that bureaucracy should be preserved and enhanced where e-government policies are concerned. Functional simplification and closure are very valuable concepts for explaining why this should be a viable approach.",
"title": ""
},
{
"docid": "77bbeb9510f4c9000291910bf06e4a22",
"text": "Traveling Salesman Problem is an important optimization issue of many fields such as transportation, logistics and semiconductor industries and it is about finding a Hamiltonian path with minimum cost. To solve this problem, many researchers have proposed different approaches including metaheuristic methods. Artificial Bee Colony algorithm is a well known swarm based optimization technique. In this paper we propose a new Artificial Bee Colony algorithm called Combinatorial ABC for Traveling Salesman Problem. Simulation results show that this Artificial Bee Colony algorithm can be used for combinatorial optimization problems.",
"title": ""
},
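A compact sketch of an Artificial Bee Colony loop adapted to the TSP, in the spirit of the combinatorial ABC described in the entry above; the swap-based neighbourhood, colony size, abandonment limit and random city coordinates are illustrative choices rather than the paper's operators or parameters.

```python
# Combinatorial Artificial Bee Colony sketch for the TSP: food sources are tours,
# a neighbour is produced by swapping two cities, and exhausted sources are
# abandoned by scouts. All parameters below are illustrative, not the paper's.
import random
import math

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def neighbour(tour):
    i, j = random.sample(range(len(tour)), 2)       # simple swap move
    t = tour[:]
    t[i], t[j] = t[j], t[i]
    return t

def abc_tsp(pts, n_sources=20, limit=30, iterations=2000, seed=0):
    random.seed(seed)
    n = len(pts)
    sources = [random.sample(range(n), n) for _ in range(n_sources)]
    costs = [tour_length(s, pts) for s in sources]
    trials = [0] * n_sources
    best = min(zip(costs, sources))
    for _ in range(iterations):
        # Employed + onlooker phases (onlookers pick sources by fitness roulette).
        order = list(range(n_sources))
        weights = [1.0 / (1.0 + c) for c in costs]
        order += random.choices(range(n_sources), weights=weights, k=n_sources)
        for k in order:
            cand = neighbour(sources[k])
            c = tour_length(cand, pts)
            if c < costs[k]:                        # greedy selection
                sources[k], costs[k], trials[k] = cand, c, 0
            else:
                trials[k] += 1
        # Scout phase: abandon exhausted food sources.
        for k in range(n_sources):
            if trials[k] > limit:
                sources[k] = random.sample(range(n), n)
                costs[k], trials[k] = tour_length(sources[k], pts), 0
        best = min(best, min(zip(costs, sources)))
    return best

if __name__ == "__main__":
    random.seed(1)
    pts = [(random.random(), random.random()) for _ in range(30)]
    cost, tour = abc_tsp(pts)
    print(f"best tour length found: {cost:.3f}")
```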
{
"docid": "dbafea1fbab901ff5a53f752f3bfb4b8",
"text": "Three studies were conducted to test the hypothesis that high trait aggressive individuals are more affected by violent media than are low trait aggressive individuals. In Study 1, participants read film descriptions and then chose a film to watch. High trait aggressive individuals were more likely to choose a violent film to watch than were low trait aggressive individuals. In Study 2, participants reported their mood before and after the showing of a violet or nonviolent videotape. High trait aggressive individuals felt more angry after viewing the violent videotape than did low trait aggressive individuals. In Study 3, participants first viewed either a violent or a nonviolent videotape and then competed with an \"opponent\" on a reaction time task in which the loser received a blast of unpleasant noise. Videotape violence was more likely to increase aggression in high trait aggressive individuals than in low trait aggressive individuals.",
"title": ""
},
{
"docid": "de761c4e3e79b5b4d056552e0a71a7b6",
"text": "A novel multiple-input multiple-output (MIMO) dielectric resonator antenna (DRA) for long term evolution (LTE) femtocell base stations is described. The proposed antenna is able to transmit and receive information independently using TE and HE modes in the LTE bands 12 (698-716 MHz, 728-746 MHz) and 17 (704-716 MHz, 734-746 MHz). A systematic design method based on perturbation theory is proposed to induce mode degeneration for MIMO operation. Through perturbing the boundary of the DRA, the amount of energy stored by a specific mode is changed as well as the resonant frequency of that mode. Hence, by introducing an adequate boundary perturbation, the TE and HE modes of the DRA will resonate at the same frequency and share a common impedance bandwidth. The simulated mutual coupling between the modes was as low as - 40 dB . It was estimated that in a rich scattering environment with an Signal-to-Noise Ratio (SNR) of 20 dB per receiver branch, the proposed MIMO DRA was able to achieve a channel capacity of 11.1 b/s/Hz (as compared to theoretical maximum 2 × 2 capacity of 13.4 b/s/Hz). Our experimental measurements successfully demonstrated the design methodology proposed in this work.",
"title": ""
}
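The 11.1 b/s/Hz figure quoted above can be sanity-checked with a standard ergodic MIMO capacity estimate; the snippet below assumes an i.i.d. Rayleigh 2×2 channel with equal power allocation, which is a common textbook model and not necessarily the channel model used in the paper.

```python
# Monte Carlo estimate of ergodic 2x2 MIMO capacity at 20 dB SNR per receive
# branch, assuming an i.i.d. Rayleigh channel and equal power per transmit
# antenna: C = E[ log2 det(I + (SNR/Nt) * H H^H) ].
import numpy as np

rng = np.random.default_rng(0)
snr = 10 ** (20 / 10)        # 20 dB
nt = nr = 2
trials = 20000
caps = np.empty(trials)
for k in range(trials):
    H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    caps[k] = np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T)).real
print(f"ergodic 2x2 Rayleigh capacity at 20 dB SNR: {caps.mean():.1f} b/s/Hz")
```

Under these assumptions the estimate lands in the vicinity of the reported 11.1 b/s/Hz, while the quoted 13.4 b/s/Hz is consistent with 2·log2(1 + SNR) for an ideal orthogonal 2×2 channel.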
] |
scidocsrr
|
f172512c8d31844ec68149e88c094982
|
Cellulose chemical markers in transformer oil insulation Part 1: Temperature correction factors
|
[
{
"docid": "7a4f42c389dbca2f3c13469204a22edd",
"text": "This article attempts to capture and summarize the known technical information and recommendations for analysis of furan test results. It will also provide the technical basis for continued gathering and evaluation of furan data for liquid power transformers, and provide a recommended structure for collecting that data.",
"title": ""
}
] |
[
{
"docid": "94f39416ba9918e664fb1cd48732e3ae",
"text": "In this paper, a nanostructured biosensor is developed to detect glucose in tear by using fluorescence resonance energy transfer (FRET) quenching mechanism. The designed FRET pair, including the donor, CdSe/ZnS quantum dots (QDs), and the acceptor, dextran-binding malachite green (MG-dextran), was conjugated to concanavalin A (Con A), an enzyme with specific affinity to glucose. In the presence of glucose, the quenched emission of QDs through the FRET mechanism is restored by displacing the dextran from Con A. To have a dual-modulation sensor for convenient and accurate detection, the nanostructured FRET sensors were assembled onto a patterned ZnO nanorod array deposited on the synthetic silicone hydrogel. Consequently, the concentration of glucose detected by the patterned sensor can be converted to fluorescence spectra with high signal-to-noise ratio and calibrated image pixel value. The photoluminescence intensity of the patterned FRET sensor increases linearly with increasing concentration of glucose from 0.03mmol/L to 3mmol/L, which covers the range of tear glucose levels for both diabetics and healthy subjects. Meanwhile, the calibrated values of pixel intensities of the fluorescence images captured by a handhold fluorescence microscope increases with increasing glucose. Four male Sprague-Dawley rats with different blood glucose concentrations were utilized to demonstrate the quick response of the patterned FRET sensor to 2µL of tear samples.",
"title": ""
},
{
"docid": "1274656b97db1f736944c174a174925d",
"text": "In full-duplex systems, due to the strong self-interference signal, system nonlinearities become a significant limiting factor that bounds the possible cancellable self-interference power. In this paper, a self-interference cancellation scheme for full-duplex orthogonal frequency division multiplexing systems is proposed. The proposed scheme increases the amount of cancellable self-interference power by suppressing the distortion caused by the transmitter and receiver nonlinearities. An iterative technique is used to jointly estimate the self-interference channel and the nonlinearity coefficients required to suppress the distortion signal. The performance is numerically investigated showing that the proposed scheme achieves a performance that is less than 0.5dB off the performance of a linear full-duplex system.",
"title": ""
},
{
"docid": "fac92316ce84b0c10b0bef2827d78b03",
"text": "Background: High rates of teacher turnover likely mean greater school instability, disruption of curricular cohesiveness, and a continual need to hire inexperienced teachers, who typically are less effective, as replacements for teachers who leave. Unfortunately, research consistently finds that teachers who work in schools with large numbers of poor students and students of color feel less satisfied and are more likely to turn over, meaning that turnover is concentrated in the very schools that would benefit most from a stable staff of experienced teachers. Despite the potential challenge that this turnover disparity poses for equity of educational opportunity and student performance gaps across schools, little research has examined the reasons for elevated teacher turnover in schools with large numbers of traditionally disadvantaged students. Purpose: This study hypothesizes that school working conditions help explain both teacher satisfaction and turnover. In particular, it focuses on the role effective principals in retaining teachers, particularly in disadvantaged schools with the greatest staffing challenges. Research Design: The study conducts quantitative analysis of national data from the 2003-04 Schools and Staffing Survey and 2004-05 Teacher Follow-up Survey. Regression analyses combat the potential for bias from omitted variables by utilizing an extensive set of control variables and employing a school district fixed effects approach that implicitly makes comparisons among principals and teachers within the same local context. Conclusions: Descriptive analyses confirm that observable measures of teachers‘ work environments, including ratings of the effectiveness of the principal, generally are lower in schools with large numbers of disadvantaged students. Regression results show that principal effectiveness is associated with greater teacher satisfaction and a lower probability that the teacher leaves the school within a year. Moreover, the positive impacts of principal effectiveness on these teacher outcomes are even greater in disadvantaged schools. These findings suggest that policies focused on getting the best principals into the most challenging school environments may be effective strategies for lowering perpetually high teacher turnover rates in those schools.",
"title": ""
},
{
"docid": "9955e99d9eba166458f5551551ab05e3",
"text": "Every day, millions of tons of temperature sensitive goods are produced, transported, stored or distributed worldwide. For all these products the control of temperature is essential. The term “cold chain” describes the series of interdependent equipment and processes employed to ensure the temperature preservation of perishables and other temperaturecontrolled products from the production to the consumption end in a safe, wholesome, and good quality state (Zhang, 2007). In other words, it is a supply chain of temperature sensitive products. So temperature-control is the key point in cold chain operation and the most important factor when prolonging the practical shelf life of produce. Thus, the major challenge is to ensure a continuous ‘cold chain’ from producer to consumer in order to guaranty prime condition of goods (Ruiz-Garcia et al., 2007).These products can be perishable items like fruit, vegetables, flowers, fish, meat and dairy products or medical products like drugs, blood, vaccines, organs, plasma and tissues. All of them can have their properties affected by temperature changes. Also some chemicals and electronic components like microchips are temperature sensitive.",
"title": ""
},
{
"docid": "e948583ef067952fa8c968de5e5ae643",
"text": "A key problem in learning representations of multiple objects from unlabeled images is that it is a priori impossible to tell which part of the image corresponds to each individual object, and which part is irrelevant clutter. Distinguishing individual objects in a scene would allow unsupervised learning of multiple objects from unlabeled images. There is psychophysical and neurophysiological evidence that the brain employs visual attention to select relevant parts of the image and to serialize the perception of individual objects. We propose a method for the selection of salient regions likely to contain objects, based on bottom-up visual attention. By comparing the performance of David Lowe s recognition algorithm with and without attention, we demonstrate in our experiments that the proposed approach can enable one-shot learning of multiple objects from complex scenes, and that it can strongly improve learning and recognition performance in the presence of large amounts of clutter. 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "a55422a96369797c7d42cb77dc99c6dc",
"text": "In order to store massive image data in real-time system, a high performance Serial Advanced Technology Attachment[1] (SATA) controller is proposed in this paper. RocketIO GTX transceiver[2] realizes physical layer of SATA protocol. Link layer and transport layers are implemented in VHDL with programmable logic resources. Application layer is developed on POWERPC440 embedded in Xilinx Virtex-5 FPGA. The whole SATA protocol implement in a platform FPGA has better features in expansibility, scalability, improvability and in-system programmability comparing with realizing it using Application Specific Integrated Circuit (ASIC). The experiment results shown that the controller works accurately and stably and the maximal sustained orderly data transfer rate up to 110 MB/s when connect to SATA hard disk. The high performance of the host SATA controller makes it possible that cheap SATA hard disk instead expensive Small Computer System Interface (SCSI) hard disk in some application. The controller is very suited for high speed mass data storage in embedded system.",
"title": ""
},
{
"docid": "df63ca9286b2fc520d6be36edb7afaef",
"text": "To analyse the accuracy of dual-energy contrast-enhanced spectral mammography in dense breasts in comparison with contrast-enhanced subtracted mammography (CESM) and conventional mammography (Mx). CESM cases of dense breasts with histological proof were evaluated in the present study. Four radiologists with varying experience in mammography interpretation blindly read Mx first, followed by CESM. The diagnostic profiles, consistency and learning curve were analysed statistically. One hundred lesions (28 benign and 72 breast malignancies) in 89 females were analysed. Use of CESM improved the cancer diagnosis by 21.2 % in sensitivity (71.5 % to 92.7 %), by 16.1 % in specificity (51.8 % to 67.9 %) and by 19.8 % in accuracy (65.9 % to 85.8 %) compared with Mx. The interobserver diagnostic consistency was markedly higher using CESM than using Mx alone (0.6235 vs. 0.3869 using the kappa ratio). The probability of a correct prediction was elevated from 80 % to 90 % after 75 consecutive case readings. CESM provided additional information with consistent improvement of the cancer diagnosis in dense breasts compared to Mx alone. The prediction of the diagnosis could be improved by the interpretation of a significant number of cases in the presence of 6 % benign contrast enhancement in this study. • DE-CESM improves the cancer diagnosis in dense breasts compared with mammography. • DE-CESM shows greater consistency than mammography alone by interobserver blind reading. • Diagnostic improvement of DE-CESM is independent of the mammographic reading experience.",
"title": ""
},
{
"docid": "0169f6c2eee1710d2ccd1403116da68f",
"text": "A resonant snubber is described for voltage-source inverters, current-source inverters, and self-commutated frequency changers. The main self-turn-off devices have shunt capacitors directly across them. The lossless resonant snubber described avoids trapping energy in a converter circuit where high dynamic stresses at both turn-on and turn-off are normally encountered. This is achieved by providing a temporary parallel path through a small ordinary thyristor (or other device operating in a similar node) to take over the high-stress turn-on duty from the main gate turn-off (GTO) or power transistor, in a manner that leaves no energy trapped after switching.<<ETX>>",
"title": ""
},
{
"docid": "dc323eabca83c4e9381539832dbb7f63",
"text": "We present the main freight transportation planning and management issues, briefly review the associated literature, describe a number of major developments, and identify trends and challenges. In order to keep the length of the paper within reasonable limits, we focus on long-haul, intercity, freight transportation. Optimization-based operations research methodologies are privileged. The paper starts with an overview of freight transportation systems and planning issues and continues with models which attempt to analyze multimodal, multicommodity transportation systems at the regional, national or global level. We then review location and network design formulations which are often associated with the long-term evolution of transportation systems and also appear prominently when service design issues are considered as described later on. Operational models and methods, particularly those aimed at the allocation and repositioning of resources such as empty vehicles, are then described. To conclude, we identify a number of interesting problems and challenges.",
"title": ""
},
{
"docid": "7ac2f63821256491f45e2a9666333853",
"text": "Precision-Recall analysis abounds in applications of binary classification where true negatives do not add value and hence should not affect assessment of the classifier’s performance. Perhaps inspired by the many advantages of receiver operating characteristic (ROC) curves and the area under such curves for accuracybased performance assessment, many researchers have taken to report PrecisionRecall (PR) curves and associated areas as performance metric. We demonstrate in this paper that this practice is fraught with difficulties, mainly because of incoherent scale assumptions – e.g., the area under a PR curve takes the arithmetic mean of precision values whereas the Fβ score applies the harmonic mean. We show how to fix this by plotting PR curves in a different coordinate system, and demonstrate that the new Precision-Recall-Gain curves inherit all key advantages of ROC curves. In particular, the area under Precision-Recall-Gain curves conveys an expected F1 score on a harmonic scale, and the convex hull of a PrecisionRecall-Gain curve allows us to calibrate the classifier’s scores so as to determine, for each operating point on the convex hull, the interval of β values for which the point optimises Fβ . We demonstrate experimentally that the area under traditional PR curves can easily favour models with lower expected F1 score than others, and so the use of Precision-Recall-Gain curves will result in better model selection.",
"title": ""
},
{
"docid": "fdd14b086d77b95b7ca00ab744f39458",
"text": "1567-4223/$34.00 Crown Copyright 2008 Publishe doi:10.1016/j.elerap.2008.11.001 * Corresponding author. Tel.: +886 7 5254713; fax: E-mail address: [email protected] (C.-C. H While eWOM advertising has recently emerged as an effective marketing strategy among marketing practitioners, comparatively few studies have been conducted to examine the eWOM from the perspective of pass-along emails. Based on social capital theory and social cognitive theory, this paper develops a model involving social enablers and personal cognition factors to explore the eWOM behavior and its efficacy. Data collected from 347 email users have lent credit to the model proposed. Tested by LISREL 8.70, the results indicate that the factors such as message involvement, social interaction tie, affection outcome expectations and message passing self-efficacy exert significant influences on pass-along email intentions (PAEIs). The study result may well be useful to marketing practitioners who are considering email marketing, especially to those who are in the process of selecting key email users and/or designing product advertisements to heighten the eWOM effect. Crown Copyright 2008 Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a120d11f432017c3080bb4107dd7ea71",
"text": "Over the last decade, the zebrafish has entered the field of cardiovascular research as a new model organism. This is largely due to a number of highly successful small- and large-scale forward genetic screens, which have led to the identification of zebrafish mutants with cardiovascular defects. Genetic mapping and identification of the affected genes have resulted in novel insights into the molecular regulation of vertebrate cardiac development. More recently, the zebrafish has become an attractive model to study the effect of genetic variations identified in patients with cardiovascular defects by candidate gene or whole-genome-association studies. Thanks to an almost entirely sequenced genome and high conservation of gene function compared with humans, the zebrafish has proved highly informative to express and study human disease-related gene variants, providing novel insights into human cardiovascular disease mechanisms, and highlighting the suitability of the zebrafish as an excellent model to study human cardiovascular diseases. In this review, I discuss recent discoveries in the field of cardiac development and specific cases in which the zebrafish has been used to model human congenital and acquired cardiac diseases.",
"title": ""
},
{
"docid": "f0c8b45d2648de6825975cba4dd9d429",
"text": "This work presents a safe navigation approach for a carlike robot. The approach relies on a global motion planning based on Velocity Vector Fields along with a Dynamic Window Approach for avoiding unmodeled obstacles. Basically, the vector field is associated with a kinematic, feedback-linearization controller whose outputs are validated, and eventually modified, by the Dynamic Window Approach. Experiments with a full-size autonomous car equipped with a stereo camera show that the vehicle was able to track the vector field and avoid obstacles in its way.",
"title": ""
},
{
"docid": "6922a913c6ede96d5062f055b55377e7",
"text": "This paper presents the issue of a nonharmonic multitone generation with the use of singing bowls and the digital signal processors. The authors show the possibility of therapeutic applications of such multitone signals. Some known methods of the digital generation of the tone signal with the additional modulation are evaluated. Two projects of the very precise multitone generators are presented. In described generators, the digital signal processors synthesize the signal, while the additional microcontrollers realize the operator's interface. As a final result, the sound of the original singing bowls is confronted with the sound synthesized by one of the generators.",
"title": ""
},
{
"docid": "22654d2ed4c921c7bceb22ce9f9dc892",
"text": "xv",
"title": ""
},
{
"docid": "ddeb70a9abd07b113c8c7bfcf2f535b6",
"text": "Implementation of authentic leadership can affect not only the nursing workforce and the profession but the healthcare delivery system and society as a whole. Creating a healthy work environment for nursing practice is crucial to maintain an adequate nursing workforce; the stressful nature of the profession often leads to burnout, disability, and high absenteeism and ultimately contributes to the escalating shortage of nurses. Leaders play a pivotal role in retention of nurses by shaping the healthcare practice environment to produce quality outcomes for staff nurses and patients. Few guidelines are available, however, for creating and sustaining the critical elements of a healthy work environment. In 2005, the American Association of Critical-Care Nurses released a landmark publication specifying 6 standards (skilled communication, true collaboration, effective decision making, appropriate staffing, meaningful recognition, and authentic leadership) necessary to establish and sustain healthy work environments in healthcare. Authentic leadership was described as the \"glue\" needed to hold together a healthy work environment. Now, the roles and relationships of authentic leaders in the healthy work environment are clarified as follows: An expanded definition of authentic leadership and its attributes (eg, genuineness, trustworthiness, reliability, compassion, and believability) is presented. Mechanisms by which authentic leaders can create healthy work environments for practice (eg, engaging employees in the work environment to promote positive behaviors) are described. A practical guide on how to become an authentic leader is advanced. A research agenda to advance the study of authentic leadership in nursing practice through collaboration between nursing and business is proposed.",
"title": ""
},
{
"docid": "e72cfaa1d2781e7dda66625ce45bdebb",
"text": "Providing appropriate methods to facilitate the analysis of time-oriented data is a key issue in many application domains. In this paper, we focus on the unique role of the parameter time in the context of visually driven data analysis. We will discuss three major aspects - visualization, analysis, and the user. It will be illustrated that it is necessary to consider the characteristics of time when generating visual representations. For that purpose, we take a look at different types of time and present visual examples. Integrating visual and analytical methods has become an increasingly important issue. Therefore, we present our experiences in temporal data abstraction, principal component analysis, and clustering of larger volumes of time-oriented data. The third main aspect we discuss is supporting user-centered visual analysis. We describe event-based visualization as a promising means to adapt the visualization pipeline to needs and tasks of users.",
"title": ""
},
{
"docid": "7ebd355d65c8de8607da0363e8c86151",
"text": "In this letter, we compare the scanning beams of two leaky-wave antennas (LWAs), respectively, loaded with capacitive and inductive radiation elements, which have not been fully discussed in previous publications. It is pointed out that an LWA with only one type of radiation element suffers from a significant gain fluctuation over its beam-scanning band. To remedy this problem, we propose an LWA alternately loaded with inductive and capacitive elements along the host transmission line. The proposed LWA is able to steer its beam continuously from backward to forward with constant gain. A microstrip-based LWA is designed on the basis of the proposed method, and the measurement of its fabricated prototype demonstrates and confirms the desired results. This design method can widely be used to obtain LWAs with constant gain based on a variety of TLs.",
"title": ""
},
{
"docid": "32025802178ce122c288a558ba6572e4",
"text": "Based on this literature review, early orthodontic treatment of unilateral posterior crossbites with mandibular shifts is recommended. Treatment success is high if it is started early. Evidence that crossbites are not self-correcting, have some association with temporomandibular disorders and cause skeletal, dental and muscle adaptation provides further rationale for early treatment. It can be difficult to treat unilateral crossbites in adults without a combination of orthodontics and surgery. The most appropriate timing of treatment occurs when the patient is in the late deciduous or early mixed dentition stage as expansion modalities are very successful in this age group and permanent incisors are given more space as a result of the expansion. Treatment of unilateral posterior crossbites generally involves symmetric expansion of the maxillary arch, removal of selective occlusal interferences and elimination of the mandibular functional shift. The general practitioner and pediatric dentist must be able to diagnose unilateral posterior crossbites successfully and provide treatment or referral to take advantage of the benefits of early treatment.",
"title": ""
},
{
"docid": "dfcb51bd990cce7fb7abfe8802dc0c4e",
"text": "In this paper, we describe the machine learning approach we used in the context of the Automatic Cephalometric X-Ray Landmark Detection Challenge. Our solution is based on the use of ensembles of Extremely Randomized Trees combined with simple pixel-based multi-resolution features. By carefully tuning method parameters with cross-validation, our approach could reach detection rates ≥ 90% at an accuracy of 2.5mm for 8 landmarks. Our experiments show however a high variability between the different landmarks, with some landmarks detected at a much lower rate than others.",
"title": ""
}
] |
scidocsrr
|
608e65df2387725640588e9912acd554
|
Speeding up Semantic Segmentation for Autonomous Driving
|
[
{
"docid": "35625f248c81ebb5c20151147483f3f6",
"text": "A very simple way to improve the performance of almost any mac hine learning algorithm is to train many different models on the same data a nd then to average their predictions [3]. Unfortunately, making predictions u ing a whole ensemble of models is cumbersome and may be too computationally expen sive to allow deployment to a large number of users, especially if the indivi dual models are large neural nets. Caruana and his collaborators [1] have shown th at it is possible to compress the knowledge in an ensemble into a single model whi ch is much easier to deploy and we develop this approach further using a dif ferent compression technique. We achieve some surprising results on MNIST and w e show that we can significantly improve the acoustic model of a heavily use d commercial system by distilling the knowledge in an ensemble of models into a si ngle model. We also introduce a new type of ensemble composed of one or more full m odels and many specialist models which learn to distinguish fine-grained c lasses that the full models confuse. Unlike a mixture of experts, these specialist m odels can be trained rapidly and in parallel.",
"title": ""
}
] |
[
{
"docid": "3392de95bfc0e16776550b2a0a8fa00e",
"text": "This paper presents a new type of three-phase voltage source inverter (VSI), called three-phase dual-buck inverter. The proposed inverter does not need dead time, and thus avoids the shoot-through problems of traditional VSIs, and leads to greatly enhanced system reliability. Though it is still a hard-switching inverter, the topology allows the use of power MOSFETs as the active devices instead of IGBTs typically employed by traditional hard-switching VSIs. As a result, the inverter has the benefit of lower switching loss, and it can be designed at higher switching frequency to reduce current ripple and the size of passive components. A unified pulsewidth modulation (PWM) is introduced to reduce computational burden in real-time implementation. Different PWM methods were applied to a three-phase dual-buck inverter, including sinusoidal PWM (SPWM), space vector PWM (SVPWM) and discontinuous space vector PWM (DSVPWM). A 2.5 kW prototype of a three-phase dual-buck inverter and its control system has been designed and tested under different dc bus voltage and modulation index conditions to verify the feasibility of the circuit, the effectiveness of the controller, and to compare the features of different PWMs. Efficiency measurement of different PWMs has been conducted, and the inverter sees peak efficiency of 98.8% with DSVPWM.",
"title": ""
},
{
"docid": "4a51fa781609c0fab79fff536a14aa43",
"text": "Recently end-to-end speech recognition has obtained much attention. One of the popular models to achieve end-to-end speech recognition is attention based encoder-decoder model, which usually generating output sequences iteratively by attending the whole representations of the input sequences. However, predicting outputs until receiving the whole input sequence is not practical for online or low time latency speech recognition. In this paper, we present a simple but effective attention mechanism which can make the encoder-decoder model generate outputs without attending the entire input sequence and can apply to online speech recognition. At each prediction step, the attention is assumed to be a time-moving gaussian window with variable size and can be predicted by using previous input and output information instead of the content based computation on the whole input sequence. To further improve the online performance of the model, we employ deep convolutional neural networks as encoder. Experiments show that the gaussian prediction based attention works well and under the help of deep convolutional neural networks the online model achieves 19.5% phoneme error rate in TIMIT ASR task.",
"title": ""
},
{
"docid": "bc92aa05e989ead172274b4558aa4443",
"text": "A recent video coding standard, called High Efficiency Video Coding (HEVC), adopts two in-loop filters for coding efficiency improvement where the in-loop filtering is done by a de-blocking filter (DF) followed by sample adaptive offset (SAO) filtering. The DF helps improve both coding efficiency and subjective quality without signaling any bit to decoder sides while SAO filtering corrects the quantization errors by sending offset values to decoders. In this paper, we first present a new in-loop filtering technique using convolutional neural networks (CNN), called IFCNN, for coding efficiency and subjective visual quality improvement. The IFCNN does not require signaling bits by using the same trained weights in both encoders and decoder. The proposed IFCNN is trained in two different QP ranges: QR1 from QP = 20 to QP = 29; and QR2 from QP = 30 to QP = 39. In testing, the IFCNN trained in QR1 is applied for the encoding/decoding with QP values less than 30 while the IFCNN trained in QR2 is applied for the case of QP values greater than 29. The experiment results show that the proposed IFCNN outperforms the HEVC reference mode (HM) with average 1.9%-2.8% gain in BD-rate for Low Delay configuration, and average 1.6%-2.6% gain in BD-rate for Random Access configuration with IDR period 16.",
"title": ""
},
{
"docid": "e0223a5563e107308c88a43df5b1c8ba",
"text": "One question central to Reinforcement Learning is how to learn a feature representation that supports algorithm scaling and re-use of learned information from different tasks. Successor Features approach this problem by learning a feature representation that satisfies a temporal constraint. We present an implementation of an approach that decouples the feature representation from the reward function, making it suitable for transferring knowledge between domains. We then assess the advantages and limitations of using Successor Features for transfer.",
"title": ""
},
{
"docid": "b14ce16f81bf19c2e3ae1120b42f14c0",
"text": "Most robotic grasping tasks assume a stationary or fixed object. In this paper, we explore the requirements for tracking and grasping a moving object. The focus of our work is to achieve a high level of interaction between a real-time vision system capable of tracking moving objects in 3-D and a robot arm with gripper that can be used to pick up a moving object. There is an interest in exploring the interplay of hand-eye coordination for dynamic grasping tasks such as grasping of parts on a moving conveyor system, assembly of articulated parts, or for grasping from a mobile robotic system. Coordination between an organism's sensing modalities and motor control system is a hallmark of intelligent behavior, and we are pursuing the goal of building an integrated sensing and actuation system that can operate in dynamic as opposed to static environments. The system we have built addresses three distinct problems in robotic hand-eye coordination for grasping moving objects: fast computation of 3-D motion parameters from vision, predictive control of a moving robotic arm to track a moving object, and interception and grasping. The system is able to operate at approximately human arm movement rates, and experimental results in which a moving model train is tracked is presented, stably grasped, and picked up by the system. The algorithms we have developed that relate sensing to actuation are quite general and applicable to a variety of complex robotic tasks that require visual feedback for arm and hand control.",
"title": ""
},
{
"docid": "bd1ab7a30b4478a6320e5cad4698c2b4",
"text": "Corresponding Author: Jing Wang Boston University, Boston, MA, USA Email: [email protected] Abstract: Non-inferiority of a diagnostic test to the standard is a common issue in medical research. For instance, we may be interested in determining if a new diagnostic test is noninferior to the standard reference test because the new test might be inexpensive to the extent that some small inferior margin in sensitivity or specificity may be acceptable. Noninferiority trials are also found to be useful in clinical trials, such as image studies, where the data are collected in pairs. Conventional noninferiority trials for paired binary data are designed with a fixed sample size and no interim analysis is allowed. Adaptive design which allows for interim modifications of the trial becomes very popular in recent years and are widely used in clinical trials because of its efficiency. However, to our knowledge there is no adaptive design method available for noninferiority trial with paired binary data. In this study, we developed an adaptive design method for non-inferiority trials with paired binary data, which can also be used for superiority trials when the noninferiority margin is set to zero. We included a trial example and provided the SAS program for the design simulations.",
"title": ""
},
{
"docid": "9a05c95de1484df50a5540b31df1a010",
"text": "Resumen. Este trabajo trata sobre un sistema de monitoreo remoto a través de una pantalla inteligente para sensores de temperatura y corriente utilizando una red híbrida CAN−ZIGBEE. El CAN bus es usado como medio de transmisión de datos a corta distancia mientras que Zigbee es empleado para que cada nodo de la red pueda interactuar de manera inalámbrica con el nodo principal. De esta manera la red híbrida combina las ventajas de cada protocolo de comunicación para intercambiar datos. El sistema cuenta con cuatro nodos, dos son CAN y reciben la información de los sensores y el resto son Zigbee. Estos nodos están a cargo de transmitir la información de un nodo CAN de manera inalámbrica y desplegarla en una pantalla inteligente.",
"title": ""
},
{
"docid": "cf52d720512c316dc25f8167d5571162",
"text": "BACKGROUND\nHidradenitis suppurativa (HS) is a chronic relapsing skin disease. Recent studies have shown promising results of anti-tumor necrosis factor-alpha treatment.\n\n\nOBJECTIVE\nTo compare the efficacy and safety of infliximab and adalimumab in the treatment of HS.\n\n\nMETHODS\nA retrospective study was performed to compare 2 cohorts of 10 adult patients suffering from severe, recalcitrant HS. In 2005, 10 patients were treated with infliximab intravenous (i.v.) (3 infusions of 5 mg/kg at weeks 0, 2, and 6). In 2009, 10 other patients were treated in the same hospital with adalimumab subcutaneous (s.c.) 40 mg every other week. Both cohorts were followed up for 1 year using identical evaluation methods [Sartorius score, quality of life index, reduction of erythrocyte sedimentation rate (ESR) and C-reactive protein (CRP), patient and doctor global assessment, and duration of efficacy].\n\n\nRESULTS\nNineteen patients completed the study. In both groups, the severity of the HS diminished. Infliximab performed better in all aspects. The average Sartorius score was reduced to 54% of baseline for the infliximab group and 66% of baseline for the adalimumab group.\n\n\nCONCLUSIONS\nAdalimumab s.c. 40 mg every other week is less effective than infliximab i.v. 5 mg/kg at weeks 0, 2, and 6.",
"title": ""
},
{
"docid": "d22c69d0c546dfb4ee5d38349bf7154f",
"text": "Investigation of functional brain connectivity patterns using functional MRI has received significant interest in the neuroimaging domain. Brain functional connectivity alterations have widely been exploited for diagnosis and prediction of various brain disorders. Over the last several years, the research community has made tremendous advancements in constructing brain functional connectivity from timeseries functional MRI signals using computational methods. However, even modern machine learning techniques rely on conventional correlation and distance measures as a basic step towards the calculation of the functional connectivity. Such measures might not be able to capture the latent characteristics of raw time-series signals. To overcome this shortcoming, we propose a novel convolutional neural network based model, FCNet, that extracts functional connectivity directly from raw fMRI time-series signals. The FCNet consists of a convolutional neural network that extracts features from time-series signals and a fully connected network that computes the similarity between the extracted features in a Siamese architecture. The functional connectivity computed using FCNet is combined with phenotypic information and used to classify individuals as healthy controls or neurological disorder subjects. Experimental results on the publicly available ADHD-200 dataset demonstrate that this innovative framework can improve classification accuracy, which indicates that the features learnt from FCNet have superior discriminative power.",
"title": ""
},
{
"docid": "ad3437a7458e9152f3eb451e5c1af10f",
"text": "In recent years the number of academic publication increased strongly. As this information flood grows, it becomes more difficult for researchers to find relevant literature effectively. To overcome this difficulty, recommendation systems can be used which often utilize text similarity to find related documents. To improve those systems we add scientometrics as a ranking measure for popularity into these algorithms. In this paper we analyse whether and how scientometrics are useful in a recommender system.",
"title": ""
},
{
"docid": "97841476457ac6599e005367d1ffc5b9",
"text": "Robust vigilance estimation during driving is very crucial in preventing traffic accidents. Many approaches have been proposed for vigilance estimation. However, most of the approaches require collecting subject-specific labeled data for calibration which is high-cost for real-world applications. To solve this problem, domain adaptation methods can be used to align distributions of source subject features (source domain) and new subject features (target domain). By reusing existing data from other subjects, no labeled data of new subjects is required to train models. In this paper, our goal is to apply adversarial domain adaptation networks to cross-subject vigilance estimation. We adopt two kinds of recently proposed adversarial domain adaptation networks and compare their performance with those of several traditional domain adaptation methods and the baseline without domain adaptation. A publicly available dataset, SEED-VIG, is used to evaluate the methods. The dataset includes electroencephalography (EEG) and electrooculography (EOG) signals, as well as the corresponding vigilance level annotations during simulated driving. Compared with the baseline, both adversarial domain adaptation networks achieve improvements over 10% in terms of Pearson’s correlation coefficient. In addition, both methods considerably outperform the traditional domain adaptation methods.",
"title": ""
},
{
"docid": "a49962a29221a26df3d7c4ef9034d61a",
"text": "In this paper we discuss the evolution of mobility management mechanisms in mobile networks. We emphasize problems with current mobility management approaches in case of very high dense and heterogeneous networks. The main contribution of the paper is a discussion on how the Software-Defined Networking (SDN) technology can be applied in mobile networks in order to efficiently handle mobility in the context of future mobile networks (5G) or evolved LTE. The discussion addresses the most important problems related to mobility management like preservation of session continuity and scalability of handovers in very dense mobile networks. Three variants of SDN usage in order to handle mobility are described and compared in this paper. The most advanced of these variants shows how mobility management mechanisms can be easily integrated with autonomie management mechanisms, providing much more advanced functionality than is provided now by the SON approach. Such mechanisms increase robustness of the handover and optimize the usage of wireless and wired mobile network resources.",
"title": ""
},
{
"docid": "65ac52564041b0c2e173560d49ec762f",
"text": "Constructionism can be a powerful framework for teaching complex content to novices. At the core of constructionism is the suggestion that by enabling learners to build creative artifacts that require complex content to function, those learners will have opportunities to learn this content in contextualized, personally-meaningful ways. In this paper, we investigate the relevance of a set of approaches broadly called “educational data mining” or “learning analytics” (henceforth, EDM) to help provide a basis for quantitative research on constructionist learning which does not abandon the richness seen as essential by many researchers in that paradigm. We suggest that EDM may have the potential to support research that is meaningful and useful both to researchers working actively in the constructionist tradition but also to wider communities. Finally, we explore potential collaborations between researchers in the EDM and constructionist traditions; such collaborations have the potential to enhance the ability of constructionist researchers to make rich inference about learning and learners, while providing EDM researchers with many interesting new research questions and challenges. In recent years, project-based, student-centered approaches to education have gained prominence, due in part to an increased demand for higher-level skills in the job market (Levi and Murname, 2004), positive research findings on the effectiveness of such approaches (Barron, Pearson, et al., 2008), and a broader acceptance in public policy circles, as shown, for example, by the Next Generation Science Standards (NGSS Lead States, 2013). While several approaches for this type of learning exist, Constructionism is one of the most popular and well-developed ones (Papert, 1980). In this paper, we investigate the relevance of a set of approaches called “educational data mining” or “learning analytics” (henceforth abbreviated as ‘EDM’) (R. Baker & Yacef, 2009; Romero & Ventura, 2010a; R. Baker & Siemens, in press) to help provide a basis for quantitative research on constructionist learning which does not abandon the richness seen as essential by many researchers in that paradigm. As such, EDM may have the potential to support research that is meaningful and useful both to researchers working actively in the constructionist tradition and to the wider community of learning scientists and policymakers. EDM, broadly, is a set of methods that apply data mining and machine learning techniques such as prediction, classification, and discovery of latent structural regularities to rich, voluminous, and idiosyncratic educational data, potentially similar to those data generated by many constructionist learning environments which allows students to explore and build their own artifacts, computer programs, and media pieces. As such, we identify four axes in which EDM methods may be helpful for constructionist research: 1. EDM methods do not require constructionists to abandon deep qualitative analysis for simplistic summative or confirmatory quantitative analysis; 2. EDM methods can generate different and complementary new analyses to support qualitative research; 3. By enabling precise formative assessments of complex constructs, EDM methods can support an increase in methodological rigor and replicability; 4. EDM can be used to present comprehensible and actionable data to learners and teachers in situ. 
In order to investigate those axes, we start by describing our perspective on compatibilities and incompatibilities between constructionism and EDM. At the core of constructionism is the suggestion that by enabling learners to build creative artifacts that require complex content to function, those learners will have opportunities to learn that complex content in connected, meaningful ways. Constructionist projects often emphasize making those artifacts (and often data) public, socially relevant, and personally meaningful to learners, and encourage working in social spaces such that learners engage each other to accelerate the learning process. diSessa and Cobb (2004) argue that constructionism serves a framework for action, as it describes its own praxis (i.e., how it matches theory to practice). The learning theory supporting constructionism is classically constructivist, combining concepts from Piaget and Vygotsky (Fosnot, 2005). As constructionism matures as a constructivist framework for action and expands in scale, constructionist projects are becoming both more complex (Reynolds & Caperton, 2011), more scalable (Resnick, Maloney, et al., 2009), and more affordable for schools following significant development in low cost “construction” technologies such as robotics and 3D printers. As such, there have been increasing opportunities to learn more about how students learn in constructionist contexts, advancing the science of learning. These discoveries will have the potential to improve the quality of all constructivist learning experiences. For example, Wilensky and Reisman (2006) have shown how constructionist modeling and simulation can make science learning more accessible, Resnick (1998) has shown how constructionism can reframe programming as art at scale, Buechley & Eisenberg (2008) have used e-textiles to engage female students in robotics, Eisenberg (2011) and Blikstein (2013, 2014) use constructionist digital fabrication to successfully teach programming, engineering, and electronics in a novel, integrated way. The findings of these research and design projects have the potential to be useful to a wide external community of teachers, researchers, practitioners, and other stakeholders. However, connecting findings from the constructionist tradition to the goals of policymakers can be challenging, due to the historical differences in methodology and values between these communities. The resources needed to study such interventions at scale are considerable, given the need to carefully document, code, and analyze each student’s work processes and artifacts. The designs of constructionist research often result in findings that do not map to what researchers, outside interests, and policymakers are expecting, in contrast to conventional controlled studies, which are designed to (more conclusively) answer a limited set of sharply targeted research questions. Due the lack of a common ground to discuss benefits and scalability of constructionist and project-based designs, these designs have been too frequently sidelined to niche institutions such as private schools, museums, or atypical public schools. To understand what the role EDM methods can play in constructionist research, we must frame what we mean by constructionist research more precisely. We follow Papert and Harel (1991) in their situating of constructionism, but they do not constrain the term to one formal definition. 
The definition is further complicated by the fact that constructionism has many overlaps with other research and design traditions, such as constructivism and socio-constructivism themselves, as well as project-based pedagogies and inquiry-based designs. However, we believe that it is possible to define the subset of constructionism amenable to EDM, a focus we adopt in this article for brevity. In this paper, we focus on the constructionist literature dealing with students learning to construct understandings by constructing (physical or virtual) artifacts, where the students' learning environments are designed and constrained such that building artifacts in/with that environment is designed to help students construct their own understandings. In other words, we are focusing on creative work done in computational environments designed to foster creative and transformational learning, such as NetLogo (Wilensky, 1999), Scratch (Resnick, Maloney, et al., 2009), or LEGO Mindstorms. This sub-category of constructionism can and does generate considerable formative and summative data. It also has the benefit of having a history of success in the classroom. From Papert’s seminal (1972) work through today, constructionist learning has been shown to promote the development of deep understanding of relatively complex content, with many examples ranging from mathematics (Harel, 1990; Wilensky, 1996) to history (Zahn, Krauskopf, Hesse, & Pea, 2010). However, constructionist learning environments, ideas, and findings have yet to reach the majority of classrooms and have had incomplete influence in the broader education research community. There are several potential reasons for this. One of them may be a lack of demonstration that findings are generalizable across populations and across specific content. Another reason is that constructionist activities are seen to be timeconsuming for teachers (Warschauer & Matuchniak, 2010), though, in practice, it has been shown that supporting understanding through project-based work could actually save time (Fosnot, 2005) and enable classroom dynamics that may streamline class preparation (e.g., peer teaching or peer feedback). A last reason is that constructionists almost universally value more deep understanding of scientific principles than facts or procedural skills even in contexts (e.g., many classrooms) in which memorization of facts and procedural skills is the target to be evaluated (Abelson & diSessa, 1986; Papert & Harel, 1991). Therefore, much of what is learned in constructionist environments does not directly translate to test scores or other established metrics. Constructionist research can be useful and convincing to audiences that do not yet take full advantage of the scientific findings of this community, but it requires careful consideration of framing and evidence to reach them. Educational data mining methods pose the potential to both enhance constructionist research, and to support constructionist researchers in communicating their findings in a fashion that other researchers consider valid. Blikstein (2011, p. 110) made ",
"title": ""
},
{
"docid": "3e60194e452e0e7a478d7c5f563eaa13",
"text": "The use of data stored in transaction logs of Web search engines, Intranets, and Web sites can provide valuable insight into understanding the information-searching process of online searchers. This understanding can enlighten information system design, interface development, and devising the information architecture for content collections. This article presents a review and foundation for conducting Web search transaction log analysis. A methodology is outlined consisting of three stages, which are collection, preparation, and analysis. The three stages of the methodology are presented in detail with discussions of goals, metrics, and processes at each stage. Critical terms in transaction log analysis for Web searching are defined. The strengths and limitations of transaction log analysis as a research method are presented. An application to log client-side interactions that supplements transaction logs is reported on, and the application is made available for use by the research community. Suggestions are provided on ways to leverage the strengths of, while addressing the limitations of, transaction log analysis for Web-searching research. Finally, a complete flat text transaction log from a commercial search engine is available as supplementary material with this manuscript. © 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "f0d85230b2a6a14f9b291a9e08a29787",
"text": "In this paper, we propose a Computer Assisted Diagnosis (CAD) system based on a deep Convolutional Neural Network (CNN) model, to build an end-to-end learning process that classifies breast mass lesions. We investigate the impact that has transfer learning when large data is scarce, and explore the proper way to fine-tune the layers to learn features that are more specific to the new data. The proposed approach showed better performance compared to other proposals that classified the same dataset. 1 Background and objectives Breast cancer is the most common invasive disease among women [Siegel et al., 2014] Optimistically, an early diagnosis of the disease increases the chances of recovery dramatically and as such, makes the early detection crucial. Mammography is the recommended screening technique, but it is not enough, we also need the radiologist expertise to check the mammograms for lesions and give a diagnosis, which can be a very challenging task[Kerlikowske et al., 2000]. Radiologists often resort to biopsies and this ends up adding exorbitant expenses to an already burdened patient and health care system [Sickles, 1991]. We propose a Computer Assisted Diagnosis (CAD) system, based on a deep Convolutional Neural Network (CNN) model, designed to be used as a “second-opinion” to help the radiologist give more accurate diagnoses. Deep Learning requires large datasets to train networks of a certain depth from scratch, which are lacking in the medical domain especially for breast cancer. Transfer learning proved to be efficient to deal with little data, even if the knowledge transfer is between two very different domains [Shin et al., 2016]. But still using the technique can be tricky, especially with medical datasets that tend to be unbalanced and limited. And when using the state-of-the art CNNs which are very deep, the models are highly inclined to suffer from overfitting even with the use of many tricks like data augmentation, regularization and dropout. The number of layers to fine-tune and the optimization strategy play a substantial role on the overall performance [Yosinski et al., 2014]. This raises few questions: • Is Transfer Learning really beneficial for this application? • How can we avoid overfitting with our small dataset ? • How much fine-tuning do we need? and what is the proper way to do it? 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. ar X iv :1 71 1. 10 75 2v 1 [ cs .C V ] 2 9 N ov 2 01 7 We investigate the proper way to perform transfer learning and fine-tuning, which will allow us to take advantage of the pre-trained weights and adapt them to our task of interest. We empirically analyze the impact of the fine-tuned fraction on the final results, and we propose to use an exponentially decaying learning rate to customize all the pre-trained weights from ImageNet[Deng et al., 2009] and make them more suited to our type of data. The best model can be used as a baseline to predict if a new “never-seen” breast mass lesion is benign or malignant.",
"title": ""
},
{
"docid": "40d8c7f1d24ef74fa34be7e557dca920",
"text": "the rapid changing Internet environment has formed a competitive business setting, which provides opportunities for conducting businesses online. Availability of online transaction systems enable users to buy and make payment for products and services using the Internet platform. Thus, customers’ involvements in online purchasing have become an important trend. However, since the market is comprised of many different people and cultures, with diverse viewpoints, e-commerce businesses are being challenged by the reality of complex behavior of consumers. Therefore, it is vital to identify the factors that affect consumers purchasing decision through e-commerce in respective cultures and societies. In response to this claim, the purpose of this study is to explore the factors affecting customers’ purchasing decision through e-commerce (online shopping). Several factors such as trust, satisfaction, return policy, cash on delivery, after sale service, cash back warranty, business reputation, social and individual attitude, are considered. At this stage, the factors mentioned above, which are commonly considered influencing purchasing decision through online shopping in literature, are hypothesized to measure the causal relationship within the framework.",
"title": ""
},
{
"docid": "719458301e92f1c5141971ea8a21342b",
"text": "In the 65 years since its formal specification, information theory has become an established statistical paradigm, providing powerful tools for quantifying probabilistic relationships. Behavior analysis has begun to adopt these tools as a novel means of measuring the interrelations between behavior, stimuli, and contingent outcomes. This approach holds great promise for making more precise determinations about the causes of behavior and the forms in which conditioning may be encoded by organisms. In addition to providing an introduction to the basics of information theory, we review some of the ways that information theory has informed the studies of Pavlovian conditioning, operant conditioning, and behavioral neuroscience. In addition to enriching each of these empirical domains, information theory has the potential to act as a common statistical framework by which results from different domains may be integrated, compared, and ultimately unified.",
"title": ""
},
{
"docid": "5d3a0b1dfdbffbd4465ad7a9bb2f6878",
"text": "The Cancer Genome Atlas (TCGA) is a public funded project that aims to catalogue and discover major cancer-causing genomic alterations to create a comprehensive \"atlas\" of cancer genomic profiles. So far, TCGA researchers have analysed large cohorts of over 30 human tumours through large-scale genome sequencing and integrated multi-dimensional analyses. Studies of individual cancer types, as well as comprehensive pan-cancer analyses have extended current knowledge of tumorigenesis. A major goal of the project was to provide publicly available datasets to help improve diagnostic methods, treatment standards, and finally to prevent cancer. This review discusses the current status of TCGA Research Network structure, purpose, and achievements.",
"title": ""
},
{
"docid": "aa0335bc5090796453d7efdc247bb477",
"text": "Understanding signature complexity has been shown to be a crucial facet for both forensic and biometric appbcations. The signature complexity can be defined as the difficulty that forgers have when imitating the dynamics (constructional aspects) of other users signatures. Knowledge of complexity along with others facets such stability and signature length can lead to more robust and secure automatic signature verification systems. The work presented in this paper investigates the creation of a novel mathematical model for the automatic assessment of the signature complexity, analysing a wider set of dynamic signature features and also incorporating a new layer of detail, investigating the complexity of individual signature strokes. To demonstrate the effectiveness of the model this work will attempt to reproduce the signature complexity assessment made by experienced FDEs on a dataset of 150 signature samples.",
"title": ""
},
{
"docid": "5f67840ff6a168c8609a20504e0bd19a",
"text": "The core motor symptoms of Parkinson's disease (PD) are attributable to the degeneration of dopaminergic neurons in the substantia nigra pars compacta (SNc). Mitochondrial oxidant stress is widely viewed a major factor in PD pathogenesis. Previous work has shown that activity-dependent calcium entry through L-type channels elevates perinuclear mitochondrial oxidant stress in SNc dopaminergic neurons, providing a potential basis for their selective vulnerability. What is less clear is whether this physiological stress is present in dendrites and if Lewy bodies, the major neuropathological lesion found in PD brains, exacerbate it. To pursue these questions, mesencephalic dopaminergic neurons derived from C57BL/6 transgenic mice were studied in primary cultures, allowing for visualization of soma and dendrites simultaneously. Many of the key features of in vivo adult dopaminergic neurons were recapitulated in vitro. Activity-dependent calcium entry through L-type channels increased mitochondrial oxidant stress in dendrites. This stress progressively increased with distance from the soma. Examination of SNc dopaminergic neurons ex vivo in brain slices verified this pattern. Moreover, the formation of intracellular α-synuclein Lewy-body-like aggregates increased mitochondrial oxidant stress in perinuclear and dendritic compartments. This stress appeared to be extramitochondrial in origin, because scavengers of cytosolic reactive oxygen species or inhibition of NADPH oxidase attenuated it. These results show that physiological and proteostatic stress can be additive in the soma and dendrites of vulnerable dopaminergic neurons, providing new insight into the factors underlying PD pathogenesis.",
"title": ""
}
] |
scidocsrr
|
27b6a1b43e2f004b195043c4a356d2f2
|
BLENDER: Enabling Local Search with a Hybrid Differential Privacy Model
|
[
{
"docid": "dbbd9f6440ee0c137ee0fb6a4aadba38",
"text": "In local differential privacy (LDP), each user perturbs her data locally before sending the noisy data to a data collector. The latter then analyzes the data to obtain useful statistics. Unlike the setting of centralized differential privacy, in LDP the data collector never gains access to the exact values of sensitive data, which protects not only the privacy of data contributors but also the collector itself against the risk of potential data leakage. Existing LDP solutions in the literature are mostly limited to the case that each user possesses a tuple of numeric or categorical values, and the data collector computes basic statistics such as counts or mean values. To the best of our knowledge, no existing work tackles more complex data mining tasks such as heavy hitter discovery over set-valued data. In this paper, we present a systematic study of heavy hitter mining under LDP. We first review existing solutions, extend them to the heavy hitter estimation, and explain why their effectiveness is limited. We then propose LDPMiner, a two-phase mechanism for obtaining accurate heavy hitters with LDP. The main idea is to first gather a candidate set of heavy hitters using a portion of the privacy budget, and focus the remaining budget on refining the candidate set in a second phase, which is much more efficient budget-wise than obtaining the heavy hitters directly from the whole dataset. We provide both in-depth theoretical analysis and extensive experiments to compare LDPMiner against adaptations of previous solutions. The results show that LDPMiner significantly improves over existing methods. More importantly, LDPMiner successfully identifies the majority true heavy hitters in practical settings.",
"title": ""
},
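To make the two-phase idea above concrete, here is a minimal sketch of the LDP primitive such schemes build on: generalized randomized response for frequency estimation, with the privacy budget split across a coarse phase over the full domain and a refinement phase over the candidates. The 50/50 budget split, the dummy "other" bucket, and the toy data are illustrative assumptions, not the parameters of LDPMiner itself.

```python
import math
import random
from collections import Counter

def grr_perturb(value, domain, eps):
    """Generalized randomized response: keep the true value with prob p,
    otherwise report a uniformly random other item from the domain."""
    d = len(domain)
    p = math.exp(eps) / (math.exp(eps) + d - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in domain if v != value])

def grr_estimate(reports, domain, eps):
    """Unbiased frequency estimates from perturbed reports."""
    d, n = len(domain), len(reports)
    p = math.exp(eps) / (math.exp(eps) + d - 1)
    q = 1.0 / (math.exp(eps) + d - 1)
    counts = Counter(reports)
    return {v: (counts[v] / n - q) / (p - q) for v in domain}

# Illustrative two-phase budget split (the even split is an assumption).
eps_total = 2.0
eps_phase1, eps_phase2 = eps_total / 2, eps_total / 2

domain = list(range(20))
true_values = [random.choice([1, 1, 1, 7, 7, 13] + domain) for _ in range(50000)]

# Phase 1: coarse estimates over the full domain to pick candidate heavy hitters.
reports1 = [grr_perturb(v, domain, eps_phase1) for v in true_values]
est1 = grr_estimate(reports1, domain, eps_phase1)
candidates = sorted(domain, key=est1.get, reverse=True)[:5]

# Phase 2: spend the remaining budget on the much smaller candidate domain,
# which is what makes the second phase more accurate per unit of budget.
phase2_domain = candidates + ["other"]          # "other" is a dummy bucket (assumption)
mapped = [v if v in candidates else "other" for v in true_values]
reports2 = [grr_perturb(v, phase2_domain, eps_phase2) for v in mapped]
est2 = grr_estimate(reports2, phase2_domain, eps_phase2)
print(sorted(est2.items(), key=lambda kv: -kv[1]))
```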
{
"docid": "89e51b29bf1486795d0b70c5817b6a75",
"text": "In this paper, we propose the first formal privacy analysis of a data anonymization process known as the synthetic data generation, a technique becoming popular in the statistics community. The target application for this work is a mapping program that shows the commuting patterns of the population of the United States. The source data for this application were collected by the U.S. Census Bureau, but due to privacy constraints, they cannot be used directly by the mapping program. Instead, we generate synthetic data that statistically mimic the original data while providing privacy guarantees. We use these synthetic data as a surrogate for the original data. We find that while some existing definitions of privacy are inapplicable to our target application, others are too conservative and render the synthetic data useless since they guard against privacy breaches that are very unlikely. Moreover, the data in our target application is sparse, and none of the existing solutions are tailored to anonymize sparse data. In this paper, we propose solutions to address the above issues.",
"title": ""
}
] |
[
{
"docid": "7eebeb133a9881e69bf3c367b9e20751",
"text": "Advanced driver assistance systems or highly automated driving systems for lane change maneuvers are expected to enhance highway traffic safety, transport efficiency, and driver comfort. To extend the capability of current advanced driver assistance systems, and eventually progress to highly automated highway driving, the task of automatically determine if, when, and how to perform a lane change maneuver, is essential. This paper thereby presents a low-complexity lane change maneuver algorithm which determines whether a lane change maneuver is desirable, and if so, selects an appropriate inter-vehicle traffic gap and time instance to perform the maneuver, and calculates the corresponding longitudinal and lateral control trajectory. The ability of the proposed lane change maneuver algorithm to make appropriate maneuver decisions and generate smooth and safe lane change trajectories in various traffic situations is demonstrated by simulation and experimental results.",
"title": ""
},
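As an illustration of the decision step described in this abstract, the sketch below checks whether an inter-vehicle gap in the target lane is acceptable. The constant-speed assumption, the time-headway thresholds, and the maneuver duration are my own illustrative choices, not the criteria used in the paper.

```python
def gap_is_acceptable(ego_speed, lead_pos, lead_speed, lag_pos, lag_speed,
                      maneuver_time=4.0, min_headway=1.5):
    """Check whether the gap between the lead and lag vehicles in the target
    lane is acceptable. Positions are longitudinal offsets relative to the ego
    vehicle (m), speeds in m/s. Vehicles are assumed to hold constant speed
    during the maneuver; the thresholds are illustrative, not calibrated."""
    # Predicted longitudinal offsets at the end of the lane change.
    lead_gap = lead_pos + (lead_speed - ego_speed) * maneuver_time
    lag_gap = -(lag_pos + (lag_speed - ego_speed) * maneuver_time)
    # Require at least `min_headway` seconds to both the lead and lag vehicles.
    return (lead_gap > min_headway * ego_speed and
            lag_gap > min_headway * lag_speed)

# Ego at 25 m/s, lead vehicle 60 m ahead at 25 m/s, lag vehicle 50 m behind at 24 m/s.
print(gap_is_acceptable(25.0, 60.0, 25.0, -50.0, 24.0))   # True
```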
{
"docid": "497fcf32281c8e9555ac975a3de45a6a",
"text": "This paper presents the framework, rules, games, controllers, and results of the first General Video Game Playing Competition, held at the IEEE Conference on Computational Intelligence and Games in 2014. The competition proposes the challenge of creating controllers for general video game play, where a single agent must be able to play many different games, some of them unknown to the participants at the time of submitting their entries. This test can be seen as an approximation of general artificial intelligence, as the amount of game-dependent heuristics needs to be severely limited. The games employed are stochastic real-time scenarios (where the time budget to provide the next action is measured in milliseconds) with different winning conditions, scoring mechanisms, sprite types, and available actions for the player. It is a responsibility of the agents to discover the mechanics of each game, the requirements to obtain a high score and the requisites to finally achieve victory. This paper describes all controllers submitted to the competition, with an in-depth description of four of them by their authors, including the winner and the runner-up entries of the contest. The paper also analyzes the performance of the different approaches submitted, and finally proposes future tracks for the competition.",
"title": ""
},
{
"docid": "0db1a54964702697ca08e40d12949771",
"text": "Synchronous and fixed-speed induction generators release the kinetic energy of their rotating mass when the power system frequency is reduced. In the case of doubly fed induction generator (DFIG)-based wind turbines, their control system operates to apply a restraining torque to the rotor according to a predetermined curve with respect to the rotor speed. This control system is not based on the power system frequency and there is negligible contribution to the inertia of the power system. A DFIG control system was modified to introduce inertia response to the DFIG wind turbine. Simulations were used to show that with the proposed control system, the DFIG wind turbine can supply considerably greater kinetic energy than a fixed-speed wind turbine.",
"title": ""
},
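The control idea in this abstract can be sketched as a supplementary torque term proportional to the rate of change of grid frequency added to the normal speed-tracking (MPPT) torque reference. The gain, filter time constant, and sign convention below are illustrative assumptions and not the modified DFIG controller from the paper.

```python
class SyntheticInertia:
    """Adds a df/dt-proportional term to the MPPT torque reference so a
    variable-speed turbine releases kinetic energy when frequency falls.
    `k_in` and the low-pass time constant `tau` are illustrative values."""

    def __init__(self, k_in=5000.0, tau=0.5, f_nom=50.0):
        self.k_in = k_in
        self.tau = tau
        self.f_filt = f_nom

    def step(self, t_mppt, f_meas, dt):
        # Low-pass filter the measured frequency and estimate its derivative.
        dfdt = (f_meas - self.f_filt) / self.tau
        self.f_filt += dfdt * dt
        # A falling frequency (dfdt < 0) increases the electromagnetic torque,
        # releasing kinetic energy from the rotating mass.
        return t_mppt - self.k_in * dfdt

ctrl = SyntheticInertia()
print(ctrl.step(t_mppt=3000.0, f_meas=49.8, dt=0.01))   # torque reference rises
```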
{
"docid": "5f351dc1334f43ce1c80a1e78581d0f9",
"text": "Based on keypoints extracted as salient image patches, an image can be described as a \"bag of visual words\" and this representation has been used in scene classification. The choice of dimension, selection, and weighting of visual words in this representation is crucial to the classification performance but has not been thoroughly studied in previous work. Given the analogy between this representation and the bag-of-words representation of text documents, we apply techniques used in text categorization, including term weighting, stop word removal, feature selection, to generate image representations that differ in the dimension, selection, and weighting of visual words. The impact of these representation choices to scene classification is studied through extensive experiments on the TRECVID and PASCAL collection. This study provides an empirical basis for designing visual-word representations that are likely to produce superior classification performance.",
"title": ""
},
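One of the representation choices this study examines is borrowing term weighting from text categorization. The sketch below applies tf-idf weighting to visual-word count histograms and L2-normalizes them; the vocabulary size and dummy counts are assumptions for illustration only.

```python
import numpy as np

def tfidf_weight(histograms):
    """Apply tf-idf weighting to a (num_images, vocab_size) matrix of
    visual-word counts, then L2-normalize each image vector."""
    counts = np.asarray(histograms, dtype=float)
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1.0)
    df = (counts > 0).sum(axis=0)                       # document frequency per visual word
    idf = np.log((1.0 + counts.shape[0]) / (1.0 + df)) + 1.0
    weighted = tf * idf
    norms = np.linalg.norm(weighted, axis=1, keepdims=True)
    return weighted / np.maximum(norms, 1e-12)

# Dummy data: 3 images described over a 5-word visual vocabulary.
hists = [[4, 0, 1, 0, 0],
         [3, 2, 0, 0, 1],
         [0, 0, 0, 5, 2]]
print(tfidf_weight(hists).round(3))
```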
{
"docid": "2a91eeedbb43438f9ed449e14d93ce8e",
"text": "In this paper, we introduce the concept of green noise—the midfrequency component of white noise—and its advantages over blue noise for digital halftoning. Unlike blue-noise dither patterns, which are composed exclusively of isolated pixels, green-noise dither patterns are composed of pixel-clusters making them less susceptible to image degradation from nonideal printing artifacts such as dot-gain. Although they are not the only techniques which generate clustered halftones, error-diffusion with output-dependent feedback and variations based on filter weight perturbation are shown to be good generators of green noise, thereby allowing for tunable coarseness. Using statistics developed for blue noise, we closely examine the spectral content of resulting dither patterns. We introduce two spatial-domain statistics for analyzing the spatial arrangement of pixels in aperiodic dither patterns, because greennoise patterns may be anisotropic, and therefore spectral statistics based on radial averages may be inappropriate for the study of these patterns.",
"title": ""
},
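A rough sketch of the mechanism named above, error diffusion with output-dependent feedback: the previously quantized neighbor nudges the threshold decision, which encourages clustered, green-noise-like dot patterns. The Floyd-Steinberg weights are standard, but the simple one-neighbor hysteresis term and its gain are my simplification of the schemes discussed, not the paper's exact filters.

```python
import numpy as np

def error_diffusion_green(img, hysteresis=0.5):
    """Halftone a grayscale image in [0, 1] with Floyd-Steinberg error
    diffusion plus output-dependent feedback from the left neighbor.
    The hysteresis gain tunes dot clustering (coarseness); it is an
    illustrative parameter, not a value from the paper."""
    h, w = img.shape
    work = img.astype(float).copy()
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            fb = hysteresis * (out[y, x - 1] - 0.5) if x > 0 else 0.0
            old = work[y, x]
            new = 1.0 if old + fb >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                work[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    work[y + 1, x - 1] += err * 3 / 16
                work[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    work[y + 1, x + 1] += err * 1 / 16
    return out

gray = np.full((32, 32), 0.3)                 # flat 30% gray patch
print(error_diffusion_green(gray).mean())     # stays near 0.3, with clustered dots
```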
{
"docid": "60664c058868f08a67d14172d87a4756",
"text": "The design of legged robots is often inspired by animals evolved to excel at different tasks. However, while mimicking morphological features seen in nature can be very powerful, robots may need to perform motor tasks that their living counterparts do not. In the absence of designs that can be mimicked, an alternative is to resort to mathematical models that allow the relationship between a robot's form and function to be explored. In this paper, we propose such a model to co-design the motion and leg configurations of a robot such that a measure of performance is optimized. The framework begins by planning trajectories for a simplified model consisting of the center of mass and feet. The framework then optimizes the length of each leg link while solving for associated full-body motions. Our model was successfully used to find optimized designs for legged robots performing tasks that include jumping, walking, and climbing up a step. Although our results are preliminary and our analysis makes a number of simplifying assumptions, our findings indicate that the cost function, the sum of squared joint torques over the duration of a task, varies substantially as the design parameters change.",
"title": ""
},
{
"docid": "c777d2fcc2a27ca17ea82d4326592948",
"text": "The existing methods for image captioning usually train the language model under the cross entropy loss, which results in the exposure bias and inconsistency of evaluation metric. Recent research has shown these two issues can be well addressed by policy gradient method in reinforcement learning domain attributable to its unique capability of directly optimizing the discrete and non-differentiable evaluation metric. In this paper, we utilize reinforcement learning method to train the image captioning model. Specifically, we train our image captioning model to maximize the overall reward of the sentences by adopting the temporal-difference (TD) learning method, which takes the correlation between temporally successive actions into account. In this way, we assign different values to different words in one sampled sentence by a discounted coefficient when back-propagating the gradient with the REINFORCE algorithm, enabling the correlation between actions to be learned. Besides, instead of estimating a “baseline” to normalize the rewards with another network, we utilize the reward of another Monte-Carlo sample as the “baseline” to avoid high variance. We show that our proposed method can improve the quality of generated captions and outperforms the state-of-the-art methods on the benchmark dataset MS COCO in terms of seven evaluation metrics.",
"title": ""
},
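A toy sketch of the gradient weighting this abstract describes: the sampled caption's sentence-level reward minus the reward of a second Monte-Carlo sample scales each word's log-probability, with a per-position discount distinguishing the words. The dummy tensors stand in for a real captioning model's outputs, and the exact placement of the discount follows my reading of the description rather than the paper's equations.

```python
import torch

def td_reinforce_loss(log_probs, reward, baseline_reward, gamma=0.95):
    """REINFORCE-style loss with a per-word discount and an MC-sample baseline.

    log_probs: (T,) log p(w_t | w_<t, image) of the sampled caption.
    reward / baseline_reward: sentence-level scores (e.g. CIDEr) of the sampled
    caption and of a second Monte-Carlo sample used as the baseline."""
    T = log_probs.shape[0]
    discounts = gamma ** torch.arange(T, dtype=log_probs.dtype)
    advantage = reward - baseline_reward
    return -(discounts * advantage * log_probs).sum()

# Dummy stand-in for the per-word log-probabilities of a 7-word sampled caption.
log_probs = torch.log(torch.rand(7)).requires_grad_(True)
loss = td_reinforce_loss(log_probs, reward=0.82, baseline_reward=0.75)
loss.backward()
print(loss.item(), log_probs.grad)
```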
{
"docid": "ef6160d304908ea87287f2071dea5f6d",
"text": "The diffusion of fake images and videos on social networks is a fast growing problem. Commercial media editing tools allow anyone to remove, add, or clone people and objects, to generate fake images. Many techniques have been proposed to detect such conventional fakes, but new attacks emerge by the day. Image-to-image translation, based on generative adversarial networks (GANs), appears as one of the most dangerous, as it allows one to modify context and semantics of images in a very realistic way. In this paper, we study the performance of several image forgery detectors against image-to-image translation, both in ideal conditions, and in the presence of compression, routinely performed upon uploading on social networks. The study, carried out on a dataset of 36302 images, shows that detection accuracies up to 95% can be achieved by both conventional and deep learning detectors, but only the latter keep providing a high accuracy, up to 89%, on compressed data.",
"title": ""
},
{
"docid": "ccc4b8f75e39488068293540aeb508e2",
"text": "We present a novel approach to sketching 2D curves with minimally varying curvature as piecewise clothoids. A stable and efficient algorithm fits a sketched piecewise linear curve using a number of clothoid segments with G2 continuity based on a specified error tolerance. Further, adjacent clothoid segments can be locally blended to result in a G3 curve with curvature that predominantly varies linearly with arc length. We also handle intended sharp corners or G1 discontinuities, as independent rotations of clothoid pieces. Our formulation is ideally suited to conceptual design applications where aesthetic fairness of the sketched curve takes precedence over the precise interpolation of geometric constraints. We show the effectiveness of our results within a system for sketch-based road and robot-vehicle path design, where clothoids are already widely used.",
"title": ""
},
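For context, the sketch below samples points along a single clothoid segment using Fresnel integrals; it illustrates the curve primitive being fit, not the paper's G2 piecewise fitting or blending algorithm. The clothoid parameter and arc length are arbitrary example values.

```python
import numpy as np
from scipy.special import fresnel

def clothoid_points(A, s_max, n=200):
    """Sample a clothoid with curvature kappa(s) = s / A**2, starting at the
    origin with zero heading. Returns an (n, 2) array of (x, y) points."""
    s = np.linspace(0.0, s_max, n)
    t = s / (A * np.sqrt(np.pi))
    S, C = fresnel(t)                       # scipy returns (S(t), C(t))
    x = A * np.sqrt(np.pi) * C
    y = A * np.sqrt(np.pi) * S
    return np.column_stack([x, y])

pts = clothoid_points(A=10.0, s_max=25.0)
print(pts[-1])                              # end point of the segment
```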
{
"docid": "75bb8497138ef8e0bea1a56f7443791e",
"text": "Generative communication is the basis of a new distributed programming langauge that is intended for systems programming in distributed settings generally and on integrated network computers in particular. It differs from previous interprocess communication models in specifying that messages be added in tuple-structured form to the computation environment, where they exist as named, independent entities until some process chooses to receive them. Generative communication results in a number of distinguishing properties in the new language, Linda, that is built around it. Linda is fully distributed in space and distributed in time; it allows distributed sharing, continuation passing, and structured naming. We discuss these properties and their implications, then give a series of examples. Linda presents novel implementation problems that we discuss in Part II. We are particularly concerned with implementation of the dynamic global name space that the generative communication model requires.",
"title": ""
},
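A toy, single-process illustration of the generative-communication primitives described above (out, rd, in), with None acting as a wildcard field. Real Linda is distributed and its rd/in operations block until a match appears; this sketch does not model either aspect.

```python
class TupleSpace:
    """Toy, non-blocking tuple space: `out` adds a tuple, `rd` reads a matching
    tuple, `in_` reads and removes one. None acts as a wildcard ('formal')
    field. Real Linda is distributed and blocks on rd/in."""

    def __init__(self):
        self.tuples = []

    def out(self, *t):
        self.tuples.append(tuple(t))

    def _match(self, template, t):
        return len(template) == len(t) and all(
            f is None or f == v for f, v in zip(template, t))

    def rd(self, *template):
        return next((t for t in self.tuples if self._match(template, t)), None)

    def in_(self, *template):
        t = self.rd(*template)
        if t is not None:
            self.tuples.remove(t)
        return t

ts = TupleSpace()
ts.out("job", 42, "render")
ts.out("result", 42, 3.14)
print(ts.rd("result", 42, None))   # ('result', 42, 3.14) stays in the space
print(ts.in_("job", None, None))   # ('job', 42, 'render') is removed
```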
{
"docid": "3e14ca940db87b6d6be7017704be13e1",
"text": "Digital Twin models are computerized clones of physical assets that can be used for in-depth analysis. Industrial production lines tend to have multiple sensors to generate near real-time status information for production. Industrial Internet of Things datasets are difficult to analyze and infer valuable insights such as points of failure, estimated overhead. etc. In this paper we introduce a simple way of formalizing knowledge as digital twin models coming from sensors in industrial production lines. We present a way on to extract and infer knowledge from large scale production line data, and enhance manufacturing process management with reasoning capabilities, by introducing a semantic query mechanism. Our system primarily utilizes a graph-based query language equivalent to conjunctive queries and has been enriched with inference rules.",
"title": ""
},
{
"docid": "6762134c344053fb167ea286e21995f3",
"text": "Image processing techniques are widely used in the domain of medical sciences for detecting various diseases, infections, tumors, cell abnormalities and various cancers. Detecting and curing a dise ase on time is very important in the field of medicine for protecting and saving human life. Mostly in case of high severity diseases where the mortality rates are more, the waiting time of patients for their reports such as blood test, MRI is more. The time taken for generation of any of the test is from 1-10 days. In high risk diseases like Hepatitis B, it is recommended that the patient’s waiting time should be as less as possible and the treatment should be started immediately. The current system used by the pathologists for identification of blood parameters is costly and the time involved in generation of the reports is also more sometimes leading to loss of patient’s life. Also the pathological tests are expensive, which are sometimes not affordable by the patient. This paper deals with an image processing technique used for detecting the abnormalities of blood cells in less time. The proposed technique also helps in segregating the blood cells in different categories based on the form factor.",
"title": ""
},
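Since the abstract singles out the form factor as the categorization feature, here is a sketch of how that shape measure (4πA/P²) is typically computed from segmented blobs. The synthetic test image and the 0.8 roundness threshold are assumptions for illustration, not values from the paper.

```python
import cv2
import numpy as np

def form_factors(binary_img):
    """Return the form factor 4*pi*A / P**2 for each connected blob in a
    binary image; values near 1 indicate round (circle-like) cells."""
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    factors = []
    for c in contours:
        area = cv2.contourArea(c)
        perim = cv2.arcLength(c, True)
        if perim > 0:
            factors.append(4.0 * np.pi * area / (perim ** 2))
    return factors

# Synthetic test image: one round "cell" and one elongated one.
img = np.zeros((200, 200), dtype=np.uint8)
cv2.circle(img, (60, 100), 30, 255, -1)
cv2.ellipse(img, (150, 100), (40, 10), 0, 0, 360, 255, -1)
for ff in form_factors(img):
    label = "round" if ff > 0.8 else "elongated"   # threshold is an assumption
    print(f"form factor {ff:.2f} -> {label}")
```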
{
"docid": "c7857bde224ef6252602798c349beb44",
"text": "Context Several studies show that people with low health literacy skills have poorer health-related knowledge and comprehension. Contribution This updated systematic review of 96 studies found that low health literacy is associated with poorer ability to understand and follow medical advice, poorer health outcomes, and differential use of some health care services. Caution No studies examined the relationship between oral literacy (speaking and listening skills) and outcomes. Implication Although it is challenging, we need to find feasible ways to improve patients' health literacy skills and reduce the negative effects of low health literacy on outcomes. The Editors The term health literacy refers to a set of skills that people need to function effectively in the health care environment (1). These skills include the ability to read and understand text and to locate and interpret information in documents (print literacy); use quantitative information for tasks, such as interpreting food labels, measuring blood glucose levels, and adhering to medication regimens (numeracy); and speak and listen effectively (oral literacy) (2, 3). Approximately 80 million U.S. adults are thought to have limited health literacy, which puts them at risk for poorer health outcomes. Rates of limited health literacy are higher among elderly, minority, and poor persons and those with less than a high school education (4). Numerous policy and advocacy organizations have expressed concern about barriers caused by low health literacy, notably the Institute of Medicine's report Health Literacy: A Prescription to End Confusion in 2004 (5) and the U.S. Department of Health and Human Services' report National Action Plan to Improve Health Literacy in 2010 (6). To understand the relationship between health literacy level and use of health care services, health outcomes, costs, and disparities in health outcomes, we conducted a systematic evidence review for the Agency for Healthcare Research and Quality (AHRQ) (published in 2004), which was limited to the relationship between print literacy and health outcomes (7). We found a consistent association between low health literacy (measured by reading skills) and more limited health-related knowledge and comprehension. The relationship between health literacy level and other outcomes was less clear, primarily because of a lack of studies and relatively unsophisticated methods in the available studies. In this review, we update and expand the earlier review (7). Since 2004, researchers have conducted new and more sophisticated studies. Thus, in synthesizing the literature, we can now consider the relationship between outcomes and health literacy (print literacy alone or combined with numeracy) and between outcomes and the numeracy component of health literacy alone. Methods We developed and followed a protocol that used standard AHRQ Evidence-based Practice Center methods. The full report describes study methods in detail and presents evidence tables for each included study (1). Literature Search We searched MEDLINE, CINAHL, the Cochrane Library, PsycINFO, and ERIC databases. For health literacy, our search dates were from 2003 to May 2010. For numeracy, they were from 1966 to May 2010; we began at an earlier date because numeracy was not addressed in our 2004 review. For this review, we updated our searches beyond what was included in the full report from May 2010 through 22 February 2011 to be current with the most recent literature. 
No Medical Subject Heading terms specifically identify health literacyrelated articles, so we conducted keyword searches, including health literacy, literacy, numeracy, and terms or phrases used to identify related measurement instruments. We also hand-searched reference lists of pertinent review articles and editorials. Appendix Table 1 shows the full search strategy. Appendix Table 1. Search Strategy Study Selection We included English-language studies on persons of all ages whose health literacy or that of their caregivers (including numeracy or oral health literacy) had been measured directly and had not been self-reported. Studies had to compare participants in relation to an outcome, including health care access and service use, health outcomes, and costs of care. For numeracy studies, outcomes also included knowledge, because our earlier review had established the relationship between only health literacy and knowledge. We did not examine outcomes concerning attitudes, social norms, or patientprovider relationships. Data Abstraction and Quality Assessment After determining article inclusion, 1 reviewer entered study data into evidence tables; a second, senior reviewer checked the information for accuracy and completeness. Two reviewers independently rated the quality of studies as good, fair, or poor by using criteria designed to detect potential risk of bias in an observational study (including selection bias, measurement bias, and control for potential confounding) and precision of measurement. Data Synthesis and Strength of Evidence We assessed the overall strength of the evidence for each outcome separately for studies measuring health literacy and those measuring numeracy on the basis of information only from good- and fair-quality studies. Using AHRQ guidance (8), we graded the strength of evidence as high, moderate, low, or insufficient on the basis of the potential risk of bias of included studies, consistency of effect across studies, directness of the evidence, and precision of the estimate (Table 1). We determined the grade on the basis of the literature from the update searches. We then considered whether the findings from the 2004 review would alter our conclusions. We graded the body of evidence for an outcome as low if the evidence was limited to 1 study that controlled for potential confounding variables or to several small studies in which all, or only some, controlled for potential confounding variables or as insufficient if findings across studies were inconsistent or were limited to 1 unadjusted study. Because of heterogeneity across studies in their approaches to measuring health literacy, numeracy, and outcomes, we summarized the evidence through consensus discussions and did not conduct any meta-analyses. Table 1. Strength of Evidence Grades and Definitions Role of the Funding Source AHRQ reviewed a draft report and provided copyright release for this manuscript. The funding source did not participate in conducting literature searches, determining study eligibility, evaluating individual studies, grading evidence, or interpreting results. Results First, we present the results from our literature search and a summary of characteristics across studies, followed by findings specific to health literacy then numeracy. We generally highlight evidence of moderate or high strength and mention only outcomes with low or insufficient evidence. Where relevant, we comment on the evidence provided through the 2004 review. 
Tables 2 and 3 summarize our findings and strength-of-evidence grade for each included health literacy and numeracy outcome, respectively. Table 2. Health Literacy Outcome Results: Strength of Evidence and Summary of Findings, 2004 and 2011 Table 3. Numeracy Outcome Results: Strength of Evidence and Summary of Findings, 2011 Characteristics of Reviewed Studies We identified 3823 citations and evaluated 1012 full-text articles (Appendix Figure). Ultimately, we included 96 studies rated as good or fair quality. These studies were reported in 111 articles because some investigators reported study results in multiple publications (98 articles on health literacy, 22 on numeracy, and 9 on both). We found no studies that examined outcomes by the oral (verbal) component of health literacy. Of the 111 articles, 100 were rated as fair quality. All studies were observational, primarily cross-sectional designs (91 of 111 articles). The Supplement (health literacy) and Appendix Table 2 (numeracy) present summary information for each included article. Supplement. Overview of Health Literacy Studies Appendix Figure. Summary of evidence search and selection. KQ = key question. Appendix Table 2. Overview of Numeracy Studies Studies varied in their measurement of health literacy and numeracy. Commonly used instruments to measure health literacy are the Rapid Estimate of Adult Literacy in Medicine (REALM) (9), the Test of Functional Health Literacy in Adults (TOFHLA) (10), and short TOFHLA (S-TOFHLA). Instruments frequently used to measure numeracy are the SchwartzWoloshin Numeracy Test (11) and the Wide Range Achievement Test (WRAT) math subtest (12). Studies also differed in how investigators distinguished between levels or thresholds of health literacyeither as a continuous measure or as categorical groups. Some studies identified 3 groups, often called inadequate, marginal, and adequate, whereas others combined 2 of the 3 groups. Because evidence was sparse for evaluating differences between marginal and adequate health literacy, our results focus on the differences between the lowest and highest groups. Studies in this update generally included multivariate analyses rather than simpler unadjusted analyses. They varied considerably, however, in regard to which potential confounding variables are controlled (Supplement and Appendix Table 2). All results reported here are from adjusted analyses that controlled for potential confounding variables, unless otherwise noted. Relationship Between Health Literacy and Outcomes Use of Health Care Services and Access to Care Emergency Care and Hospitalizations. Nine studies examining the risk for emergency care use (1321) and 6 examining the risk for hospitalizations (1419) provided moderate evidence showing increased use of both services among people with lower health literacy, including elderly persons, clinic and inner-city hospital patients, patients with asthma, and patients with congestive heart failure.",
"title": ""
},
{
"docid": "3f1f3e66fa1a117ef5c2f44d8f7dcbe8",
"text": "The Softmax function is used in the final layer of nearly all existing sequence-tosequence models for language generation. However, it is usually the slowest layer to compute which limits the vocabulary size to a subset of most frequent types; and it has a large memory footprint. We propose a general technique for replacing the softmax layer with a continuous embedding layer. Our primary innovations are a novel probabilistic loss, and a training and inference procedure in which we generate a probability distribution over pre-trained word embeddings, instead of a multinomial distribution over the vocabulary obtained via softmax. We evaluate this new class of sequence-to-sequence models with continuous outputs on the task of neural machine translation. We show that our models train up to 2.5x faster than the state-of-the-art models while achieving comparable translation quality. These models are capable of handling very large vocabularies without compromising on translation quality or speed. They also produce more meaningful errors than the softmax-based models, as these errors typically lie in a subspace of the vector space of the reference translations1.",
"title": ""
},
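A toy sketch of the decoding side of such continuous-output models: the decoder emits a vector and the output word is the pre-trained embedding nearest to it. The paper trains with a probabilistic loss; the plain cosine distance used here is only a simple stand-in, and the vocabulary and embeddings are random dummies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dummy "pre-trained" embeddings: 10k-word vocabulary, 300-d unit vectors.
vocab = [f"word_{i}" for i in range(10_000)]
emb = rng.normal(size=(10_000, 300))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

def nearest_word(predicted_vec):
    """Decode a continuous output vector to the closest embedding by cosine
    similarity (the embeddings above are unit-normalized)."""
    v = predicted_vec / np.linalg.norm(predicted_vec)
    return vocab[int(np.argmax(emb @ v))]

def cosine_loss(predicted_vec, target_vec):
    """Simple stand-in training signal: 1 - cos(predicted, target); the
    paper's actual loss is probabilistic."""
    v = predicted_vec / np.linalg.norm(predicted_vec)
    t = target_vec / np.linalg.norm(target_vec)
    return 1.0 - float(v @ t)

pred = emb[1234] + 0.1 * rng.normal(size=300)   # decoder output near word_1234
print(nearest_word(pred), cosine_loss(pred, emb[1234]))
```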
{
"docid": "8f75cc71e07209029947be095bf12b48",
"text": "BACKGROUND\nGastroGard, an omeprazole powder paste formulation, is considered the standard treatment for gastric ulcers in horses and is highly effective. Gastrozol, an enteric-coated omeprazole formulation for horses, has recently become available, but efficacy data are controversial and sparse.\n\n\nOBJECTIVES\nTo investigate the efficacy of GastroGard and Gastrozol at labeled doses (4 and 1 mg of omeprazole per kg bwt, respectively, PO q24h) in healing of gastric ulcers.\n\n\nANIMALS\n40 horses; 9.5 ± 4.6 years; 491 ± 135 kg.\n\n\nMETHODS\nProspective, randomized, blinded study. Horses with an ulcer score ≥1 (Equine Gastric Ulcer Council) were randomly divided into 2 groups and treated for 2 weeks each with GastroGard followed by Gastrozol (A) or vice versa (B). After 2 and 4 weeks, scoring was repeated and compared with baseline. Plasma omeprazole concentrations were measured on the first day of treatment after administration of GastroGard (n = 5) or Gastrozol (n = 5).\n\n\nRESULTS\nCompared with baseline (squamous score (A) 1.65 ± 0.11, (B) 1.98 ± 0.11), ulcer scores at 2 weeks ((A) 0.89 ± 0.11, (B) 1.01 ± 0.11) and 4 weeks ((A) 1.10 ± 0.12, (B) 0.80 ± 0.12) had significantly decreased in both groups (P < .001), independent of treatment (P = .7). Plasma omeprazole concentrations were significantly higher after GastroGard compared with Gastrozol administration (AUCGG = 2856 (1405-4576) ng/mL × h, AUCGZ = 604 (430-1609) ng/mL × h; P = .03). The bioavailability for Gastrozol was 1.26 (95% CI 0.56-2.81) times higher than for GastroGard.\n\n\nCONCLUSIONS AND CLINICAL IMPORTANCE\nBoth Gastrozol and GastroGard, combined with appropriate environmental changes, promote healing of gastric ulcers in horses. However, despite enteric coating of Gastrozol, plasma omeprazole concentrations after single labeled doses were significantly higher with GastroGard.",
"title": ""
},
{
"docid": "d0c43cf66df910094195bc3476cb8fa7",
"text": "Global information systems development has become increasingly prevalent and is facing a variety of challenges, including the challenge of cross-cultural management. However, research on exactly how cross-cultural factors affect global information systems development work is limited, especially with respect to distributed collaborative work between the U.S. and China. This paper draws on the interviews of Chinese IT professionals and discusses three emergent themes relevant to cross-cultural challenges: the complexity of language issues, culture and communication styles and work behaviors, and cultural understandings at different levels. Implications drawn from our findings will provide actionable knowledge to organizational management entities.",
"title": ""
},
{
"docid": "0c3fa5b92d95abb755f12dda030474c2",
"text": "This paper examines the hypothesis that the persistence of low spatial and marital mobility in rural India, despite increased growth rates and rising inequality in recent years, is due to the existence of sub-caste networks that provide mutual insurance to their members. Unique panel data providing information on income, assets, gifts, loans, consumption, marriage, and migration are used to link caste networks to household and aggregate mobility. Our key finding, consistent with the hypothesis that local risk-sharing networks restrict mobility, is that among households with the same (permanent) income, those in higher-income caste networks are more likely to participate in caste-based insurance arrangements and are less likely to both out-marry and out-migrate. At the aggregate level, the networks appear to have coped successfully with the rising inequality within sub-castes that accompanied the Green Revolution. The results suggest that caste networks will continue to smooth consumption in rural India for the foreseeable future, as they have for centuries, unless alternative consumption-smoothing mechanisms of comparable quality become available. ∗We are very grateful to Andrew Foster for many useful discussions that substantially improved the paper. We received helpful comments from Jan Eeckhout, Rachel Kranton, Ethan Ligon and seminar participants at Arizona, Chicago, Essex, Georgetown, Harvard, IDEI, ITAM, LEA-INRA, LSE, Ohio State, UCLA, and NBER. Alaka Holla provided excellent research assistance. Research support from NICHD grant R01-HD046940 and NSF grant SES-0431827 is gratefully acknowledged. †Brown University and NBER ‡Yale University",
"title": ""
},
{
"docid": "c1438a335a41da3b61e6ca1100b97074",
"text": "What dimensions can be identified in the trust formation processes in Business-to-Consumer (B-to-C) electronic commerce (e-commerce)? How do these differ in importance between academia and practitioners? The purpose of this research is to build a model of multidimensional trust formation for online exchanges in B-to-C electronic commerce. Further, to study the relative importance of the dimensions between two expert groups (academics and practitioners), two semantic network and content analyses are conducted: one for academia’s perspectives and another for practitioners’ perspectives of trust in B-to-C electronic commerce. The results show that the two perspectives are divergent in some ways and complementary in other ways. We believe that the two need to be combined to represent meaningful trust-building mechanisms in websites. D 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "3b145aa14e1052467f78b911cda4109b",
"text": "Dual Connectivity(DC) is one of the key technologies standardized in Release 12 of the 3GPP specifications for the Long Term Evolution (LTE) network. It attempts to increase the per-user throughput by allowing the user equipment (UE) to maintain connections with the MeNB (master eNB) and SeNB (secondary eNB) simultaneously, which are inter-connected via non-ideal backhaul. In this paper, we focus on one of the use cases of DC whereby the downlink U-plane data is split at the MeNB and transmitted to the UE via the associated MeNB and SeNB concurrently. In this case, out-of-order packet delivery problem may occur at the UE due to the delay over the non-ideal backhaul link, as well as the dynamics of channel conditions over the MeNB-UE and SeNB-UE links, which will introduce extra delay for re-ordering the packets. As a solution, we propose to adopt the RaptorQ FEC code to encode the source data at the MeNB, and then the encoded symbols are separately transmitted through the MeNB and SeNB. The out-of-order problem can be effectively eliminated since the UE can decode the original data as long as it receives enough encoded symbols from either the MeNB or SeNB. We present detailed protocol design for the RaptorQ code based concurrent transmission scheme, and simulation results are provided to illustrate the performance of the proposed scheme.",
"title": ""
}
] |
scidocsrr
|
84ceca462bb655e036cc43e9b1124984
|
Computing on the Edge of Chaos: Structure and Randomness in Encrypted Computation
|
[
{
"docid": "d92b7ee3739843c2649d0f3f1e0ee5b2",
"text": "In this short note we observe that the Peikert-Vaikuntanathan-Waters (PVW) method of packing many plaintext elements in a single Regev-type ciphertext, can be used for performing SIMD homomorphic operations on packed ciphertext. This provides an alternative to the Smart-Vercauteren (SV) ciphertextpacking technique that relies on polynomial-CRT. While the SV technique is only applicable to schemes that rely on ring-LWE (or other hardness assumptions in ideal lattices), the PVW method can be used also for cryptosystems whose security is based on standard LWE (or more broadly on the hardness of “General-LWE”). Although using the PVW method with LWE-based schemes leads to worse asymptotic efficiency than using the SV technique with ring-LWE schemes, the simplicity of this method may still offer some practical advantages. Also, the two techniques can be used in tandem with “general-LWE” schemes, suggesting yet another tradeoff that can be optimized for different settings. Acknowledgments The first author is sponsored by DARPA under agreement number FA8750-11-C-0096. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. The second and third authors are sponsored by DARPA and ONR under agreement number N00014-11C-0390. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA, or the U.S. Government. Distribution Statement “A” (Approved for Public Release, Distribution Unlimited).",
"title": ""
},
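To make the "packing" idea in this note concrete, here is a plaintext-only toy of CRT batching, the mechanism behind the Smart-Vercauteren slots that the PVW method offers an alternative to: several small values are packed into one integer, and a single modular addition or multiplication on the packed value acts slot-wise. No encryption or lattice machinery is involved; this only illustrates the SIMD structure, and the small moduli are arbitrary examples.

```python
from math import prod

def crt_pack(values, moduli):
    """Pack one value per slot into a single integer modulo prod(moduli),
    using the Chinese Remainder Theorem. Requires pairwise-coprime moduli."""
    M = prod(moduli)
    x = 0
    for v, m in zip(values, moduli):
        Mi = M // m
        x += v * Mi * pow(Mi, -1, m)     # CRT reconstruction term
    return x % M

def crt_unpack(x, moduli):
    return [x % m for m in moduli]

moduli = [7, 11, 13, 17]                 # 4 plaintext "slots"
a = crt_pack([1, 2, 3, 4], moduli)
b = crt_pack([5, 6, 7, 8], moduli)
M = prod(moduli)

# One modular operation on the packed integers acts on every slot at once.
print(crt_unpack((a + b) % M, moduli))   # sums [6, 8, 10, 12]
print(crt_unpack((a * b) % M, moduli))   # products 5,12,21,32 reduced per slot -> [5, 1, 8, 15]
```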
{
"docid": "5b0eef5eed1645ae3d88bed9b20901b9",
"text": "We present a radically new approach to fully homomorphic encryption (FHE) that dramatically improves performance and bases security on weaker assumptions. A central conceptual contribution in our work is a new way of constructing leveled fully homomorphic encryption schemes (capable of evaluating arbitrary polynomial-size circuits), without Gentry’s bootstrapping procedure. Specifically, we offer a choice of FHE schemes based on the learning with error (LWE) or ring-LWE (RLWE) problems that have 2 security against known attacks. For RLWE, we have: • A leveled FHE scheme that can evaluate L-level arithmetic circuits with Õ(λ · L) per-gate computation – i.e., computation quasi-linear in the security parameter. Security is based on RLWE for an approximation factor exponential in L. This construction does not use the bootstrapping procedure. • A leveled FHE scheme that uses bootstrapping as an optimization, where the per-gate computation (which includes the bootstrapping procedure) is Õ(λ), independent of L. Security is based on the hardness of RLWE for quasi-polynomial factors (as opposed to the sub-exponential factors needed in previous schemes). We obtain similar results for LWE, but with worse performance. We introduce a number of further optimizations to our schemes. As an example, for circuits of large width – e.g., where a constant fraction of levels have width at least λ – we can reduce the per-gate computation of the bootstrapped version to Õ(λ), independent of L, by batching the bootstrapping operation. Previous FHE schemes all required Ω̃(λ) computation per gate. At the core of our construction is a much more effective approach for managing the noise level of lattice-based ciphertexts as homomorphic operations are performed, using some new techniques recently introduced by Brakerski and Vaikuntanathan (FOCS 2011). ∗Sponsored by the Air Force Research Laboratory (AFRL). Disclaimer: This material is based on research sponsored by DARPA under agreement number FA8750-11-C-0096 and FA8750-11-2-0225. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. Approved for Public Release, Distribution Unlimited. †This material is based on research sponsored by DARPA under Agreement number FA8750-11-2-0225. All disclaimers as above apply.",
"title": ""
}
] |
[
{
"docid": "a1494d0c89a4eca3ef4d38d577f5621a",
"text": "Object skeletons are useful for object representation and object detection. They are complementary to the object contour, and provide extra information, such as how object scale (thickness) varies among object parts. But object skeleton extraction from natural images is very challenging, because it requires the extractor to be able to capture both local and non-local image context in order to determine the scale of each skeleton pixel. In this paper, we present a novel fully convolutional network with multiple scale-associated side outputs to address this problem. By observing the relationship between the receptive field sizes of the different layers in the network and the skeleton scales they can capture, we introduce two scale-associated side outputs to each stage of the network. The network is trained by multi-task learning, where one task is skeleton localization to classify whether a pixel is a skeleton pixel or not, and the other is skeleton scale prediction to regress the scale of each skeleton pixel. Supervision is imposed at different stages by guiding the scale-associated side outputs toward the ground-truth skeletons at the appropriate scales. The responses of the multiple scale-associated side outputs are then fused in a scale-specific way to detect skeleton pixels using multiple scales effectively. Our method achieves promising results on two skeleton extraction datasets, and significantly outperforms other competitors. In addition, the usefulness of the obtained skeletons and scales (thickness) are verified on two object detection applications: foreground object segmentation and object proposal detection.",
"title": ""
},
{
"docid": "4b988535edefeb3ff7df89bcb900dd1c",
"text": "Context: As a result of automated software testing, large amounts of software test code (script) are usually developed by software teams. Automated test scripts provide many benefits, such as repeatable, predictable, and efficient test executions. However, just like any software development activity, development of test scripts is tedious and error prone. We refer, in this study, to all activities that should be conducted during the entire lifecycle of test-code as Software Test-Code Engineering (STCE). Objective: As the STCE research area has matured and the number of related studies has increased, it is important to systematically categorize the current state-of-the-art and to provide an overview of the trends in this field. Such summarized and categorized results provide many benefits to the broader community. For example, they are valuable resources for new researchers (e.g., PhD students) aiming to conduct additional secondary studies. Method: In this work, we systematically classify the body of knowledge related to STCE through a systematic mapping (SM) study. As part of this study, we pose a set of research questions, define selection and exclusion criteria, and systematically develop and refine a systematic map. Results: Our study pool includes a set of 60 studies published in the area of STCE between 1999 and 2012. Our mapping data is available through an online publicly-accessible repository. We derive the trends for various aspects of STCE. Among our results are the following: (1) There is an acceptable mix of papers with respect to different contribution facets in the field of STCE and the top two leading facets are tool (68%) and method (65%). The studies that presented new processes, however, had a low rate (3%), which denotes the need for more process-related studies in this area. (2) Results of investigation about research facet of studies and comparing our result to other SM studies shows that, similar to other fields in software engineering, STCE is moving towards more rigorous validation approaches. (3) A good mixture of STCE activities has been presented in the primary studies. Among them, the two leading activities are quality assessment and co-maintenance of test-code with production code. The highest growth rate for co-maintenance activities in recent years shows the importance and challenges involved in this activity. (4) There are two main categories of quality assessment activity: detection of test smells and oracle assertion adequacy. (5) JUnit is the leading test framework which has been used in about 50% of the studies. (6) There is a good mixture of SUT types used in the studies: academic experimental systems (or simple code examples), real open-source and commercial systems. (7) Among 41 tools that are proposed for STCE, less than half of the tools (45%) were available for download. It is good to have this percentile of tools to be available, although not perfect, since the availability of tools can lead to higher impact on research community and industry. Conclusion: We discuss the emerging trends in STCE, and discuss the implications for researchers and practitioners in this area. The results of our systematic mapping can help researchers to obtain an overview of existing STCE approaches and spot areas in the field that require more attention from the",
"title": ""
},
{
"docid": "00b851715df7fe4878f74796df9d8061",
"text": "Low duty-cycle mobile systems can benefit from ultra-low power deep neural network (DNN) accelerators. Analog in-memory computational units are used to store synaptic weights in on-chip non-volatile arrays and perform current-based calculations. In-memory computation entirely eliminates off-chip weight accesses, parallelizes operation, and amortizes readout power costs by reusing currents. The proposed system achieves 900nW measured power, with an estimated energy efficiency of 0.012pJ/MAC in a 130nm SONOS process.",
"title": ""
},
{
"docid": "f5fd1d6f15c9ef06c343378a6f7038a0",
"text": "Wayfinding is part of everyday life. This study concentrates on the development of a conceptual model of human navigation in the U.S. Interstate Highway Network. It proposes three different levels of conceptual understanding that constitute the cognitive map: the Planning Level, the Instructional Level, and the Driver Level. This paper formally defines these three levels and examines the conceptual objects that comprise them. The problem treated here is a simpler version of the open problem of planning and navigating a multi-mode trip. We expect the methods and preliminary results found here for the Interstate system to apply to other systems such as river transportation networks and railroad networks.",
"title": ""
},
{
"docid": "36a0b3223b83927f4dfe358086f2a660",
"text": "We train a set of state of the art neural networks, the Maxout networks (Goodfellow et al., 2013a), on three benchmark datasets: the MNIST, CIFAR10 and SVHN, with three distinct storing formats: floating point, fixed point and dynamic fixed point. For each of those datasets and for each of those formats, we assess the impact of the precision of the storage on the final error of the training. We find that very low precision storage is sufficient not just for running trained networks but also for training them. For example, Maxout networks state-of-the-art results are nearly maintained with 10 bits for storing activations and gradients, and 12 bits for storing parameters.",
"title": ""
},
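A small sketch of the fixed-point and dynamic fixed-point rounding that such low-precision experiments simulate in software. The 12-bit width mirrors the parameter width quoted above, but the rounding scheme (round-to-nearest with saturation) and the per-tensor exponent rule are my simplifications, not the paper's exact procedure.

```python
import numpy as np

def to_fixed_point(x, integer_bits, frac_bits):
    """Simulate signed fixed-point storage: round to a grid of 2**-frac_bits
    and saturate to the representable range."""
    scale = 2.0 ** frac_bits
    max_val = 2.0 ** integer_bits - 1.0 / scale
    return np.clip(np.round(x * scale) / scale, -2.0 ** integer_bits, max_val)

def to_dynamic_fixed_point(x, total_bits=12):
    """Dynamic fixed point: choose a shared exponent per tensor so the largest
    magnitude still fits, then quantize with the remaining fractional bits."""
    max_abs = np.max(np.abs(x)) + 1e-12
    integer_bits = max(0, int(np.ceil(np.log2(max_abs))) + 1)
    frac_bits = total_bits - 1 - integer_bits      # 1 bit reserved for the sign
    return to_fixed_point(x, integer_bits, frac_bits)

w = np.random.randn(4, 4) * 0.05                   # typical small weights
print(np.abs(w - to_dynamic_fixed_point(w, total_bits=12)).max())
```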
{
"docid": "a89c471c0ad38741eaf48a83970da456",
"text": "Phenotypic and functional heterogeneity arise among cancer cells within the same tumour as a consequence of genetic change, environmental differences and reversible changes in cell properties. Some cancers also contain a hierarchy in which tumorigenic cancer stem cells differentiate into non-tumorigenic progeny. However, it remains unclear what fraction of cancers follow the stem-cell model and what clinical behaviours the model explains. Studies using lineage tracing and deep sequencing could have implications for the cancer stem-cell model and may help to determine the extent to which it accounts for therapy resistance and disease progression.",
"title": ""
},
{
"docid": "7d01463ce6dd7e7e08ebaf64f6916b1d",
"text": "An effective location algorithm, which considers nonline-of-sight (NLOS) propagation, is presented. By using a new variable to replace the square term, the problem becomes a mathematical programming problem, and then the NLOS propagation’s effect can be evaluated. Compared with other methods, the proposed algorithm has high accuracy.",
"title": ""
},
{
"docid": "7d0badaeeb94658690f0809c134d3963",
"text": "Vascular tissue engineering is an area of regenerative medicine that attempts to create functional replacement tissue for defective segments of the vascular network. One approach to vascular tissue engineering utilizes seeding of biodegradable tubular scaffolds with stem (and/or progenitor) cells wherein the seeded cells initiate scaffold remodeling and prevent thrombosis through paracrine signaling to endogenous cells. Stem cells have received an abundance of attention in recent literature regarding the mechanism of their paracrine therapeutic effect. However, very little of this mechanistic research has been performed under the aegis of vascular tissue engineering. Therefore, the scope of this review includes the current state of TEVGs generated using the incorporation of stem cells in biodegradable scaffolds and potential cell-free directions for TEVGs based on stem cell secreted products. The current generation of stem cell-seeded vascular scaffolds are based on the premise that cells should be obtained from an autologous source. However, the reduced regenerative capacity of stem cells from certain patient groups limits the therapeutic potential of an autologous approach. This limitation prompts the need to investigate allogeneic stem cells or stem cell secreted products as therapeutic bases for TEVGs. The role of stem cell derived products, particularly extracellular vesicles (EVs), in vascular tissue engineering is exciting due to their potential use as a cell-free therapeutic base. EVs offer many benefits as a therapeutic base for functionalizing vascular scaffolds such as cell specific targeting, physiological delivery of cargo to target cells, reduced immunogenicity, and stability under physiological conditions. However, a number of points must be addressed prior to the effective translation of TEVG technologies that incorporate stem cell derived EVs such as standardizing stem cell culture conditions, EV isolation, scaffold functionalization with EVs, and establishing the therapeutic benefit of this combination treatment.",
"title": ""
},
{
"docid": "d3883fe900e7b541b17990fb8533832f",
"text": "\"Environmental impact assessment\" denotes the attempt to predict and assess the impact of development projects on the environment. A component dealing specifically with human health is often called an \"environmental health impact assessment.\" It is widely held that such impact assessment offers unique opportunities for the protection and promotion of human health. The following components were identified as key elements of an integrated environmental health impact assessment model: project analysis, analysis of status quo (including regional analysis, population analysis, and background situation), prediction of impact (including prognosis of future pollution and prognosis of health impact), assessment of impact, recommendations, communication of results, and evaluation of the overall procedure. The concept was applied to a project of extending a waste disposal facility and to a city bypass highway project. Currently, the coverage of human health aspects in environmental impact assessment still tends to be incomplete, and public health departments often do not participate. Environmental health impact assessment as a tool for health protection and promotion is underutilized. It would be useful to achieve consensus on a comprehensive generic concept. An international initiative to improve the situation seems worth some consideration.",
"title": ""
},
{
"docid": "4de1ea43b95330901620bd2f69865029",
"text": "Recent trends towards increasing complexity in distributed embedded real-time systems pose challenges in designing and implementing a reliable system such as a self-driving car. The conventional way of improving reliability is to use redundant hardware to replicate the whole (sub)system. Although hardware replication has been widely deployed in hard real-time systems such as avionics, space shuttles and nuclear power plants, it is significantly less attractive to many applications because the amount of necessary hardware multiplies as the size of the system increases. The growing needs of flexible system design are also not consistent with hardware replication techniques. To address the needs of dependability through redundancy operating in real-time, we propose a layer called SAFER(System-level Architecture for Failure Evasion in Real-time applications) to incorporate configurable task-level fault-tolerance features to tolerate fail-stop processor and task failures for distributed embedded real-time systems. To detect such failures, SAFER monitors the health status and state information of each task and broadcasts the information. When a failure is detected using either time-based failure detection or event-based failure detection, SAFER reconfigures the system to retain the functionality of the whole system. We provide a formal analysis of the worst-case timing behaviors of SAFER features. We also describe the modeling of a system equipped with SAFER to analyze timing characteristics through a model-based design tool called SysWeaver. SAFER has been implemented on Ubuntu 10.04 LTS and deployed on Boss, an award-winning autonomous vehicle developed at Carnegie Mellon University. We show various measurements using simulation scenarios used during the 2007 DARPA Urban Challenge. Finally, we present a case study of failure recovery by SAFER when node failures are injected.",
"title": ""
},
{
"docid": "611eacd767f1ea709c1c4aca7acdfcdb",
"text": "This paper presents a bi-directional converter applied in electric bike. The main structure is a cascade buck-boost converter, which transfers the energy stored in battery for driving motor, and can recycle the energy resulted from the back electromotive force (BEMF) to charge battery by changing the operation mode. Moreover, the proposed converter can also serve as a charger by connecting with AC line directly. Besides, the single-chip DSP TMS320F2812 is adopted as a control core to manage the switching behaviors of each mode and to detect the battery capacity. In this paper, the equivalent models of each mode and complete design considerations are all detailed. All the experimental results are used to demonstrate the feasibility.",
"title": ""
},
{
"docid": "e95257d099750281c83d98af2e194b34",
"text": "This paper presents a real-coded memetic algorithm that applies a crossover hill-climbing to solutions produced by the genetic operators. On the one hand, the memetic algorithm provides global search (reliability) by means of the promotion of high levels of population diversity. On the other, the crossover hill-climbing exploits the self-adaptive capacity of real-parameter crossover operators with the aim of producing an effective local tuning on the solutions (accuracy). An important aspect of the memetic algorithm proposed is that it adaptively assigns different local search probabilities to individuals. It was observed that the algorithm adjusts the global/local search balance according to the particularities of each problem instance. Experimental results show that, for a wide range of problems, the method we propose here consistently outperforms other real-coded memetic algorithms which appeared in the literature.",
"title": ""
},
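A compact sketch of the crossover hill-climbing idea at the core of this abstract: repeatedly recombine the current best solution with a parent using a real-coded crossover (BLX-α here) and keep improvements. Population management, the adaptive local-search probabilities, and the paper's specific operators are omitted; the objective, bounds, and iteration counts are toy values.

```python
import random

def sphere(x):
    """Toy objective to minimize."""
    return sum(v * v for v in x)

def blx_alpha(p1, p2, alpha=0.5, bounds=(-5.0, 5.0)):
    """BLX-alpha crossover: sample each gene uniformly from the parents'
    interval extended by alpha on both sides, then clip to the bounds."""
    child = []
    for a, b in zip(p1, p2):
        lo, hi = min(a, b), max(a, b)
        d = hi - lo
        g = random.uniform(lo - alpha * d, hi + alpha * d)
        child.append(min(max(g, bounds[0]), bounds[1]))
    return child

def crossover_hill_climb(best, parent, f, iters=200):
    """Local refinement: keep crossing `best` with `parent`, accept improvements."""
    best_fit = f(best)
    for _ in range(iters):
        child = blx_alpha(best, parent)
        fit = f(child)
        if fit < best_fit:
            best, best_fit = child, fit
    return best, best_fit

random.seed(1)
x1 = [random.uniform(-5, 5) for _ in range(5)]
x2 = [random.uniform(-5, 5) for _ in range(5)]
print(crossover_hill_climb(x1, x2, sphere)[1])
```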
{
"docid": "65bc99201599ec17347d3fe0857cd39a",
"text": "Many children strive to attain excellence in sport. However, although talent identification and development programmes have gained popularity in recent decades, there remains a lack of consensus in relation to how talent should be defined or identified and there is no uniformly accepted theoretical framework to guide current practice. The success rates of talent identification and development programmes have rarely been assessed and the validity of the models applied remains highly debated. This article provides an overview of current knowledge in this area with special focus on problems associated with the identification of gifted adolescents. There is a growing agreement that traditional cross-sectional talent identification models are likely to exclude many, especially late maturing, 'promising' children from development programmes due to the dynamic and multidimensional nature of sport talent. A conceptual framework that acknowledges both genetic and environmental influences and considers the dynamic and multidimensional nature of sport talent is presented. The relevance of this model is highlighted and recommendations for future work provided. It is advocated that talent identification and development programmes should be dynamic and interconnected taking into consideration maturity status and the potential to develop rather than to exclude children at an early age. Finally, more representative real-world tasks should be developed and employed in a multidimensional design to increase the efficacy of talent identification and development programmes.",
"title": ""
},
{
"docid": "768a8cfff3f127a61f12139466911a94",
"text": "The metabolism of NAD has emerged as a key regulator of cellular and organismal homeostasis. Being a major component of both bioenergetic and signaling pathways, the molecule is ideally suited to regulate metabolism and major cellular events. In humans, NAD is synthesized from vitamin B3 precursors, most prominently from nicotinamide, which is the degradation product of all NAD-dependent signaling reactions. The scope of NAD-mediated regulatory processes is wide including enzyme regulation, control of gene expression and health span, DNA repair, cell cycle regulation and calcium signaling. In these processes, nicotinamide is cleaved from NAD(+) and the remaining ADP-ribosyl moiety used to modify proteins (deacetylation by sirtuins or ADP-ribosylation) or to generate calcium-mobilizing agents such as cyclic ADP-ribose. This review will also emphasize the role of the intermediates in the NAD metabolome, their intra- and extra-cellular conversions and potential contributions to subcellular compartmentalization of NAD pools.",
"title": ""
},
{
"docid": "36db2c06d65576e03e00017a9060fd24",
"text": "Real-world relations among entities can oen be observed and determined by different perspectives/views. For example, the decision made by a user on whether to adopt an item relies on multiple aspects such as the contextual information of the decision, the item’s aributes, the user’s profile and the reviews given by other users. Different views may exhibit multi-way interactions among entities and provide complementary information. In this paper, we introduce a multi-tensor-based approach that can preserve the underlying structure of multi-view data in a generic predictive model. Specifically, we propose structural factorization machines (SFMs) that learn the common latent spaces shared by multi-view tensors and automatically adjust the importance of each view in the predictive model. Furthermore, the complexity of SFMs is linear in the number of parameters, which make SFMs suitable to large-scale problems. Extensive experiments on real-world datasets demonstrate that the proposed SFMs outperform several state-of-the-art methods in terms of prediction accuracy and computational cost. CCS CONCEPTS •Computingmethodologies→Machine learning; Supervised learning; Factorization methods;",
"title": ""
},
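As background for the factorization-machine family referenced in the passage above, here is a minimal sketch of a plain second-order factorization machine predictor (not the multi-view, tensor-based SFM model itself) using the standard O(nk) pairwise-interaction identity; the function name `fm_predict` and all parameter values are made up for illustration.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine score for one feature vector.

    x  : (n,)   feature vector
    w0 : scalar bias
    w  : (n,)   linear weights
    V  : (n, k) latent factors, one k-dimensional vector per feature
    """
    linear = w0 + w @ x
    # O(n*k) trick for the pairwise term: 0.5 * sum_f [(V^T x)_f^2 - ((V*V)^T (x*x))_f]
    xv = V.T @ x
    x2v2 = (V ** 2).T @ (x ** 2)
    return linear + 0.5 * np.sum(xv ** 2 - x2v2)

# Toy usage with random (hypothetical) parameters.
rng = np.random.default_rng(0)
n, k = 6, 3
x = rng.random(n)
print(fm_predict(x, 0.1, rng.normal(size=n), rng.normal(size=(n, k))))
```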
{
"docid": "c9ad1daa4ee0d900c1a2aa9838eb9918",
"text": "A central question in human development is how young children gain knowledge so fast. We propose that analogical generalization drives much of this early learning and allows children to generate new abstractions from experience. In this paper, we review evidence for analogical generalization in both children and adults. We discuss how analogical processes interact with the child's changing knowledge base to predict the course of learning, from conservative to domain-general understanding. This line of research leads to challenges to existing assumptions about learning. It shows that (a) it is not enough to consider the distribution of examples given to learners; one must consider the processes learners are applying; (b) contrary to the general assumption, maximizing variability is not always the best route for maximizing generalization and transfer.",
"title": ""
},
{
"docid": "7605c3ae299d7e23c383eea352da81da",
"text": "Demands for very high system capacity and end-user data rates of the order of 10 Gb/s can be met in localized environments by Ultra-Dense Networks (UDN), characterized as networks with very short inter-site distances capable of ensuring low interference levels during communications. UDNs are expected to operate in the millimeter-wave band, where wide bandwidth signals needed for such high data rates can be designed, and will rely on high-gain beamforming to mitigate path loss and ensure low interference. The dense deployment of infrastructure nodes will make traditional wire-based backhaul provisioning challenging. Wireless self-backhauling over multiple hops is proposed to enhance flexibility in deployment. A description of the architecture and a concept based on separation of mobility, radio resource coordination among multiple nodes, and data plane handling, as well as on integration with wide-area networks, is introduced. A simulation of a multi-node office environment is used to demonstrate the performance of wireless self-backhauling at various loads.",
"title": ""
},
{
"docid": "8123ab525ce663e44b104db2cacd59a9",
"text": "Extractive summarization is the strategy of concatenating extracts taken from a corpus into a summary, while abstractive summarization involves paraphrasing the corpus using novel sentences. We define a novel measure of corpus controversiality of opinions contained in evaluative text, and report the results of a user study comparing extractive and NLG-based abstractive summarization at different levels of controversiality. While the abstractive summarizer performs better overall, the results suggest that the margin by which abstraction outperforms extraction is greater when controversiality is high, providing aion outperforms extraction is greater when controversiality is high, providing a context in which the need for generationbased methods is especially great.",
"title": ""
},
{
"docid": "2b53b125dc8c79322aabb083a9c991e4",
"text": "Geographical location is vital to geospatial applications like local search and event detection. In this paper, we investigate and improve on the task of text-based geolocation prediction of Twitter users. Previous studies on this topic have typically assumed that geographical references (e.g., gazetteer terms, dialectal words) in a text are indicative of its author’s location. However, these references are often buried in informal, ungrammatical, and multilingual data, and are therefore non-trivial to identify and exploit. We present an integrated geolocation prediction framework and investigate what factors impact on prediction accuracy. First, we evaluate a range of feature selection methods to obtain “location indicative words”. We then evaluate the impact of nongeotagged tweets, language, and user-declared metadata on geolocation prediction. In addition, we evaluate the impact of temporal variance on model generalisation, and discuss how users differ in terms of their geolocatability. We achieve state-of-the-art results for the text-based Twitter user geolocation task, and also provide the most extensive exploration of the task to date. Our findings provide valuable insights into the design of robust, practical text-based geolocation prediction systems.",
"title": ""
},
{
"docid": "7551b0023dd92888ac229ffda4dfd29e",
"text": "Ever since the inception of mobile telephony, the downlink and uplink of cellular networks have been coupled, that is, mobile terminals have been constrained to associate with the same base station in both the downlink and uplink directions. New trends in network densification and mobile data usage increase the drawbacks of this constraint, and suggest that it should be revisited. In this article we identify and explain five key arguments in favor of downlink/uplink decoupling based on a blend of theoretical, experimental, and architectural insights. We then overview the changes needed in current LTE-A mobile systems to enable this decoupling, and then look ahead to fifth generation cellular standards. We demonstrate that decoupling can lead to significant gains in network throughput, outage, and power consumption at a much lower cost compared to other solutions that provide comparable or lower gains.",
"title": ""
}
] |
scidocsrr
|
d93f93049619c519e11f4b4601712615
|
Gamifying Information Systems - a synthesis of Gamification mechanics and Dynamics
|
[
{
"docid": "4f6a6f633e512a33fc0b396765adcdf0",
"text": "Interactive systems often require calibration to ensure that input and output are optimally configured. Without calibration, user performance can degrade (e.g., if an input device is not adjusted for the user's abilities), errors can increase (e.g., if color spaces are not matched), and some interactions may not be possible (e.g., use of an eye tracker). The value of calibration is often lost, however, because many calibration processes are tedious and unenjoyable, and many users avoid them altogether. To address this problem, we propose calibration games that gather calibration data in an engaging and entertaining manner. To facilitate the creation of calibration games, we present design guidelines that map common types of calibration to core tasks, and then to well-known game mechanics. To evaluate the approach, we developed three calibration games and compared them to standard procedures. Users found the game versions significantly more enjoyable than regular calibration procedures, without compromising the quality of the data. Calibration games are a novel way to motivate users to carry out calibrations, thereby improving the performance and accuracy of many human-computer systems.",
"title": ""
},
{
"docid": "78e21364224b9aa95f86ac31e38916ef",
"text": "Gamification is the use of game design elements and game mechanics in non-game contexts. This idea has been used successfully in many web based businesses to increase user engagement. Some researchers suggest that it could also be used in web based education as a tool to increase student motivation and engagement. In an attempt to verify those theories, we have designed and built a gamification plugin for a well-known e-learning platform. We have made an experiment using this plugin in a university course, collecting quantitative and qualitative data in the process. Our findings suggest that some common beliefs about the benefits obtained when using games in education can be challenged. Students who completed the gamified experience got better scores in practical assignments and in overall score, but our findings also suggest that these students performed poorly on written assignments and participated less on class activities, although their initial motivation was higher. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "bf5f08174c55ed69e454a87ff7fbe6e2",
"text": "In much of the current literature on supply chain management, supply networks are recognized as a system. In this paper, we take this observation to the next level by arguing the need to recognize supply networks as a complex adaptive system (CAS). We propose that many supply networks emerge rather than result from purposeful design by a singular entity. Most supply chain management literature emphasizes negative feedback for purposes of control; however, the emergent patterns in a supply network can much better be managed through positive feedback, which allows for autonomous action. Imposing too much control detracts from innovation and flexibility; conversely, allowing too much emergence can undermine managerial predictability and work routines. Therefore, when managing supply networks, managers must appropriately balance how much to control and how much to let emerge. © 2001 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "f9cddbf2b0df51aeaf240240bd324b33",
"text": "Grammatical agreement means that features associated with one linguistic unit (for example number or gender) become associated with another unit and then possibly overtly expressed, typically with morphological markers. It is one of the key mechanisms used in many languages to show that certain linguistic units within an utterance grammatically depend on each other. Agreement systems are puzzling because they can be highly complex in terms of what features they use and how they are expressed. Moreover, agreement systems have undergone considerable change in the historical evolution of languages. This article presents language game models with populations of agents in order to find out for what reasons and by what cultural processes and cognitive strategies agreement systems arise. It demonstrates that agreement systems are motivated by the need to minimize combinatorial search and semantic ambiguity, and it shows, for the first time, that once a population of agents adopts a strategy to invent, acquire and coordinate meaningful markers through social learning, linguistic self-organization leads to the spontaneous emergence and cultural transmission of an agreement system. The article also demonstrates how attested grammaticalization phenomena, such as phonetic reduction and conventionalized use of agreement markers, happens as a side effect of additional economizing principles, in particular minimization of articulatory effort and reduction of the marker inventory. More generally, the article illustrates a novel approach for studying how key features of human languages might emerge.",
"title": ""
},
{
"docid": "143da39941ecc8fb69e87d611503b9c0",
"text": "A dual-core 64b Xeonreg MP processor is implemented in a 65nm 8M process. The 435mm2 die has 1.328B transistors. Each core has two threads and a unified 1MB L2 cache. The 16MB unified, 16-way set-associative L3 cache implements both sleep and shut-off leakage reduction modes",
"title": ""
},
{
"docid": "e5f38cb3857c5101111c69d7318ebcbc",
"text": "Rotator cuff tendinitis is one of the main causes of shoulder pain. The objective of this study was to evaluate the possible additive effects of low-power laser treatment in combination with conventional physiotherapy endeavors in these patients. A total of 50 patients who were referred to the Physical Medicine and Rehabilitation Clinic with shoulder pain and rotator cuff disorders were selected. Pain severity measured with visual analogue scale (VAS), abduction, and external rotation range of motion in shoulder joint was measured by goniometry, and evaluation of daily functional abilities of patients was measured by shoulder disability questionnaire. Twenty-five of the above patients were randomly assigned into the control group and received only routine physiotherapy. The other 25 patients were assigned into the experimental group and received conventional therapy plus low-level laser therapy (4 J/cm2 at each point over a maximum of ten painful points of shoulder region for total 5 min duration). The above measurements were assessed at the end of the third week of therapy in each group and the results were analyzed statistically. In both groups, statistically significant improvement was detected in all outcome measures compared to baseline (p < 0.05). Comparison between two different groups revealed better results for control of pain (reduction in VAS average) and shoulder disability problems in the experimental group versus the control (3.1 ± 2.2 vs. 5 ± 2.6, p = 0.029 and 4.4 ± 3.1 vs. 8.5 ± 5.1, p = 0.031, respectively ) after intervention. Positive objective signs also had better results in the experimental group, but the mean range of active abduction (144.92 ± 31.6 vs. 132.80 ± 31.3) and external rotation (78.0 ± 19.5 vs. 76.3 ± 19.1) had no significant difference between the two groups (p = 0.20 and 0.77, respectively). As one of physical modalities, gallium-arsenide low-power laser combined with conventional physiotherapy has superiority over routine physiotherapy from the view of decreasing pain and improving the patient’s function, but no additional advantages were detected in increasing shoulder joint range of motion in comparison to other physical agents.",
"title": ""
},
{
"docid": "e1bd202db576085b70f0494d29791a5b",
"text": "Object class labelling is the task of annotating images with labels on the presence or absence of objects from a given class vocabulary. Simply asking one yes-no question per class, however, has a cost that is linear in the vocabulary size and is thus inefficient for large vocabularies. Modern approaches rely on a hierarchical organization of the vocabulary to reduce annotation time, but remain expensive (several minutes per image for the 200 classes in ILSVRC). Instead, we propose a new interface where classes are annotated via speech. Speaking is fast and allows for direct access to the class name, without searching through a list or hierarchy. As additional advantages, annotators can simultaneously speak and scan the image for objects, the interface can be kept extremely simple, and using it requires less mouse movement. However, a key challenge is to train annotators to only say words from the given class vocabulary. We present a way to tackle this challenge and show that our method yields high-quality annotations at significant speed gains (2.3− 14.9× faster than existing methods).",
"title": ""
},
{
"docid": "0485beab9d781e99046042a15ea913c5",
"text": "Systems for processing continuous monitoring queries over data streams must be adaptive because data streams are often bursty and data characteristics may vary over time. We focus on one particular type of adaptivity: the ability to gracefully degrade performance via \"load shedding\" (dropping unprocessed tuples to reduce system load) when the demands placed on the system cannot be met in full given available resources. Focusing on aggregation queries, we present algorithms that determine at what points in a query plan should load shedding be performed and what amount of load should be shed at each point in order to minimize the degree of inaccuracy introduced into query answers. We report the results of experiments that validate our analytical conclusions.",
"title": ""
},
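To make the idea above concrete, here is a minimal sketch of uniform random load shedding for a streaming SUM aggregate, where surviving tuples are rescaled so the estimate stays unbiased in expectation. It is a generic illustration, not the paper's algorithm for placing load shedders in a query plan, and the drop probability and data are arbitrary.

```python
import random

def shed_and_sum(stream, drop_prob, seed=0):
    """Randomly drop tuples with probability drop_prob, then scale the
    surviving SUM by 1/(1 - drop_prob) to keep the estimate unbiased."""
    rng = random.Random(seed)
    kept_sum = 0.0
    for value in stream:
        if rng.random() >= drop_prob:   # keep this tuple
            kept_sum += value
    return kept_sum / (1.0 - drop_prob)

values = list(range(1, 1001))           # true SUM = 500500
print(shed_and_sum(values, drop_prob=0.3))
```

Dividing by (1 - drop_prob) is what keeps the estimate unbiased; choosing where in the plan and how much to shed, which is the paper's focus, is not modeled here.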
{
"docid": "9e208e6beed62575a92f32031b7af8ad",
"text": "Recently, interests on cleaning robots workable in pipes (termed as in-pipe cleaning robot) are increasing because Garbage Automatic Collection Facilities (i.e, GACF) are widely being installed in Seoul metropolitan area of Korea. So far research on in-pipe robot has been focused on inspection rather than cleaning. In GACF, when garbage is moving, we have to remove the impurities which are stuck to the inner face of the pipe (diameter: 300mm or 400mm). Thus, in this paper, by using TRIZ (Inventive Theory of Problem Solving in Russian abbreviation), we will propose an in-pipe cleaning robot of GACF with the 6-link sliding mechanism which can be adjusted to fit into the inner face of pipe using pneumatic pressure(not spring). The proposed in-pipe cleaning robot for GACF can have forward/backward movement itself as well as rotation of brush in cleaning. The robot body should have the limited size suitable for the smaller pipe with diameter of 300mm. In addition, for the pipe with diameter of 400mm, the links of robot should stretch to fit into the diameter of the pipe by using the sliding mechanism. Based on the conceptual design using TRIZ, we will set up the initial design of the robot in collaboration with a field engineer of Robot Valley, Inc. in Korea. For the optimal design of in-pipe cleaning robot, the maximum impulsive force of collision between the robot and the inner face of pipe is simulated by using RecurDyn® when the link of sliding mechanism is stretched to fit into the 400mm diameter of the pipe. The stresses exerted on the 6 links of sliding mechanism by the maximum impulsive force will be simulated by using ANSYS® Workbench based on the Design Of Experiment(in short DOE). Finally the optimal dimensions including thicknesses of 4 links will be decided in order to have the best safety factor as 2 in this paper as well as having the minimum mass of 4 links. It will be verified that the optimal design of 4 links has the best safety factor close to 2 as well as having the minimum mass of 4 links, compared with the initial design performed by the expert of Robot Valley, Inc. In addition, the prototype of in-pipe cleaning robot will be stated with further research.",
"title": ""
},
{
"docid": "84646992c6de3b655f8ccd2bda3e6d4c",
"text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A doi:10.1016/j.eswa.2012.02.064 ⇑ Corresponding author. E-mail addresses: [email protected] (R. C bo.it (M. Ferrara). This paper proposes a novel fingerprint retrieval system that combines level-1 (local orientation and frequencies) and level-2 (minutiae) features. Various scoreand rank-level fusion strategies and a novel hybrid fusion approach are evaluated. Extensive experiments are carried out on six public databases and a systematic comparison is made with eighteen retrieval methods and seventeen exclusive classification techniques published in the literature. The novel approach achieves impressive results: its retrieval accuracy is definitely higher than competing state-of-the-art methods, with error rates that in some cases are even one or two orders of magnitude smaller. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4de971edc8e677d554ae77f6976fc5d3",
"text": "With the widespread use of encrypted data transport network traffic encryption is becoming a standard nowadays. This presents a challenge for traffic measurement, especially for analysis and anomaly detection methods which are dependent on the type of network traffic. In this paper, we survey existing approaches for classification and analysis of encrypted traffic. First, we describe the most widespread encryption protocols used throughout the Internet. We show that the initiation of an encrypted connection and the protocol structure give away a lot of information for encrypted traffic classification and analysis. Then, we survey payload and feature-based classification methods for encrypted traffic and categorize them using an established taxonomy. The advantage of some of described classification methods is the ability to recognize the encrypted application protocol in addition to the encryption protocol. Finally, we make a comprehensive comparison of the surveyed feature-based classification methods and present their weaknesses and strengths. Copyright c © 2014 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "83f067159913e65410a054681461ab4d",
"text": "Cloud computing has revolutionized the way computing and software services are delivered to the clients on demand. It offers users the ability to connect to computing resources and access IT managed services with a previously unknown level of ease. Due to this greater level of flexibility, the cloud has become the breeding ground of a new generation of products and services. However, the flexibility of cloud-based services comes with the risk of the security and privacy of users' data. Thus, security concerns among users of the cloud have become a major barrier to the widespread growth of cloud computing. One of the security concerns of cloud is data mining based privacy attacks that involve analyzing data over a long period to extract valuable information. In particular, in current cloud architecture a client entrusts a single cloud provider with his data. It gives the provider and outside attackers having unauthorized access to cloud, an opportunity of analyzing client data over a long period to extract sensitive information that causes privacy violation of clients. This is a big concern for many clients of cloud. In this paper, we first identify the data mining based privacy risks on cloud data and propose a distributed architecture to eliminate the risks.",
"title": ""
},
{
"docid": "804ddcaf56ef34b0b578cc53d7cca304",
"text": "This review article describes two protocols adapted from lung ultrasound: the bedside lung ultrasound in emergency (BLUE)-protocol for the immediate diagnosis of acute respiratory failure and the fluid administration limited by lung sonography (FALLS)-protocol for the management of acute circulatory failure. These applications require the mastery of 10 signs indicating normal lung surface (bat sign, lung sliding, A-lines), pleural effusions (quad and sinusoid sign), lung consolidations (fractal and tissue-like sign), interstitial syndrome (lung rockets), and pneumothorax (stratosphere sign and the lung point). These signs have been assessed in adults, with diagnostic accuracies ranging from 90% to 100%, allowing consideration of ultrasound as a reasonable bedside gold standard. In the BLUE-protocol, profiles have been designed for the main diseases (pneumonia, congestive heart failure, COPD, asthma, pulmonary embolism, pneumothorax), with an accuracy > 90%. In the FALLS-protocol, the change from A-lines to lung rockets appears at a threshold of 18 mm Hg of pulmonary artery occlusion pressure, providing a direct biomarker of clinical volemia. The FALLS-protocol sequentially rules out obstructive, then cardiogenic, then hypovolemic shock for expediting the diagnosis of distributive (usually septic) shock. These applications can be done using simple grayscale machines and one microconvex probe suitable for the whole body. Lung ultrasound is a multifaceted tool also useful for decreasing radiation doses (of interest in neonates where the lung signatures are similar to those in adults), from ARDS to trauma management, and from ICUs to points of care. If done in suitable centers, training is the least of the limitations for making use of this kind of visual medicine.",
"title": ""
},
{
"docid": "733ddc5a642327364c2bccb6b1258fac",
"text": "Human memory is unquestionably a vital cognitive ability but one that can often be unreliable. External memory aids such as diaries, photos, alarms and calendars are often employed to assist in remembering important events in our past and future. The recent trend for lifelogging, continuously documenting ones life through wearable sensors and cameras, presents a clear opportunity to augment human memory beyond simple reminders and actually improve its capacity to remember. This article surveys work from the fields of computer science and psychology to understand the potential for such augmentation, the technologies necessary for realising this opportunity and to investigate what the possible benefits and ethical pitfalls of using such technology might be.",
"title": ""
},
{
"docid": "f9bd86958566868d2da17aad9c5029df",
"text": "A Multi-Agent System (MAS) is an organization of coordinated autonomous agents that interact in order to achieve common goals. Considering real world organizations as an analogy, this paper proposes architectural styles for MAS which adopt concepts from organization theory and strategic alliances literature. The styles are intended to represent a macro-level architecture of a MAS, and they are modeled using the i* framework which offers the notions of actor, goal and actor dependency for modeling multi-agent settings. The styles are also specified as metaconcepts in the Telos modeling language. Moreover, each style is evaluated with respect to a set of software quality attributes, such as predictability and adaptability. The paper also explores the adoption of micro-level patterns proposed elsewhere in order to give a finer-grain description of a MAS architecture. These patterns define how goals assigned to actors participating in an organizational architecture will be fulfilled by agents. An e-business example illustrates both the styles and patterns proposed in this work. The research is being conducted within the context of Tropos, a comprehensive software development methodology for agent-oriented software.",
"title": ""
},
{
"docid": "b181559966c55d90741f62e645b7d2f7",
"text": "BACKGROUND AND AIMS\nPsychological stress is associated with inflammatory bowel disease [IBD], but the nature of this relationship is complex. At present, there is no simple tool to screen for stress in IBD clinical practice or assess stress repeatedly in longitudinal studies. Our aim was to design a single-question 'stressometer' to rapidly measure stress and validate this in IBD patients.\n\n\nMETHODS\nIn all, 304 IBD patients completed a single-question 'stressometer'. This was correlated with stress as measured by the Depression Anxiety Stress Scales [DASS-21], quality of life, and disease activity. Test-retest reliability was assessed in 31 patients who completed the stressometer and the DASS-21 on two occasions 4 weeks apart.\n\n\nRESULTS\nStressometer levels correlated with the DASS-21 stress dimension in both Crohn's disease [CD] (Spearman's rank correlation coefficient [rs] 0.54; p < 0.001) and ulcerative colitis [UC] [rs 0.59; p < 0.001]. Stressometer levels were less closely associated with depression and anxiety [rs range 0.36 to 0.49; all p-values < 0.001]. Stressometer scores correlated with all four Short Health Scale quality of life dimensions in both CD and UC [rs range 0.35 to 0.48; all p-values < 0.001] and with disease activity in Crohn's disease [rs 0.46; p < 0.001] and ulcerative colitis [rs 0.20; p = 0.02]. Responsiveness was confirmed with a test-retest correlation of 0.43 [p = 0.02].\n\n\nCONCLUSIONS\nThe stressometer is a simple, valid, and responsive measure of psychological stress in IBD patients and may be a useful patient-reported outcome measure in future IBD clinical and research assessments.",
"title": ""
},
{
"docid": "f3b9269e3d6e6098384eda277129864c",
"text": "Action planning using learned and differentiable forward models of the world is a general approach which has a number of desirable properties, including improved sample complexity over modelfree RL methods, reuse of learned models across different tasks, and the ability to perform efficient gradient-based optimization in continuous action spaces. However, this approach does not apply straightforwardly when the action space is discrete. In this work, we show that it is in fact possible to effectively perform planning via backprop in discrete action spaces, using a simple paramaterization of the actions vectors on the simplex combined with input noise when training the forward model. Our experiments show that this approach can match or outperform model-free RL and discrete planning methods on gridworld navigation tasks in terms of performance and/or planning time while using limited environment interactions, and can additionally be used to perform model-based control in a challenging new task where the action space combines discrete and continuous actions. We furthermore propose a policy distillation approach which yields a fast policy network which can be used at inference time, removing the need for an iterative planning procedure.",
"title": ""
},
{
"docid": "46e8609b7cf5cfc970aa75fa54d3551d",
"text": "BACKGROUND\nAims were to assess the efficacy of metacognitive training (MCT) in people with a recent onset of psychosis in terms of symptoms as a primary outcome and metacognitive variables as a secondary outcome.\n\n\nMETHOD\nA multicenter, randomized, controlled clinical trial was performed. A total of 126 patients were randomized to an MCT or a psycho-educational intervention with cognitive-behavioral elements. The sample was composed of people with a recent onset of psychosis, recruited from nine public centers in Spain. The treatment consisted of eight weekly sessions for both groups. Patients were assessed at three time-points: baseline, post-treatment, and at 6 months follow-up. The evaluator was blinded to the condition of the patient. Symptoms were assessed with the PANSS and metacognition was assessed with a battery of questionnaires of cognitive biases and social cognition.\n\n\nRESULTS\nBoth MCT and psycho-educational groups had improved symptoms post-treatment and at follow-up, with greater improvements in the MCT group. The MCT group was superior to the psycho-educational group on the Beck Cognitive Insight Scale (BCIS) total (p = 0.026) and self-certainty (p = 0.035) and dependence self-subscale of irrational beliefs, comparing baseline and post-treatment. Moreover, comparing baseline and follow-up, the MCT group was better than the psycho-educational group in self-reflectiveness on the BCIS (p = 0.047), total BCIS (p = 0.045), and intolerance to frustration (p = 0.014). Jumping to Conclusions (JTC) improved more in the MCT group than the psycho-educational group (p = 0.021). Regarding the comparison within each group, Theory of Mind (ToM), Personalizing Bias, and other subscales of irrational beliefs improved in the MCT group but not the psycho-educational group (p < 0.001-0.032).\n\n\nCONCLUSIONS\nMCT could be an effective psychological intervention for people with recent onset of psychosis in order to improve cognitive insight, JTC, and tolerance to frustration. It seems that MCT could be useful to improve symptoms, ToM, and personalizing bias.",
"title": ""
},
{
"docid": "31c2dc8045f43c7bf1aa045e0eb3b9ad",
"text": "This paper addresses the task of functional annotation of genes from biomedical literature. We view this task as a hierarchical text categorization problem with Gene Ontology as a class hierarchy. We present a novel global hierarchical learning approach that takes into account the semantics of a class hierarchy. This algorithm with AdaBoost as the underlying learning procedure significantly outperforms the corresponding “flat” approach, i.e. the approach that does not consider any hierarchical information. In addition, we propose a novel hierarchical evaluation measure that gives credit to partially correct classification and discriminates errors by both distance and depth in a class hierarchy.",
"title": ""
},
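The passage above credits partially correct predictions in a class hierarchy. As one generic way to do this (not necessarily the exact measure proposed in the paper), the sketch below computes hierarchical precision, recall, and F1 by comparing ancestor sets; the toy hierarchy and label names are hypothetical.

```python
def ancestors(node, parent):
    """Return {node} plus all of its ancestors in a tree given as child -> parent."""
    out = set()
    while node is not None:
        out.add(node)
        node = parent.get(node)
    return out

def hierarchical_prf(true_label, pred_label, parent):
    """Hierarchical precision/recall/F1: compare ancestor sets, so a prediction
    close to the true node in the hierarchy still gets partial credit."""
    T, P = ancestors(true_label, parent), ancestors(pred_label, parent)
    prec = len(T & P) / len(P)
    rec = len(T & P) / len(T)
    f1 = 0.0 if prec + rec == 0 else 2 * prec * rec / (prec + rec)
    return prec, rec, f1

# Hypothetical GO-like toy hierarchy, given as child -> parent (None = top level).
parent = {"metabolism": None, "lipid_metabolism": "metabolism",
          "transport": None, "ion_transport": "transport"}
print(hierarchical_prf("lipid_metabolism", "metabolism", parent))    # near miss: partial credit
print(hierarchical_prf("lipid_metabolism", "ion_transport", parent)) # distant error: no credit
```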
{
"docid": "d6d8ef59feb54c76fdcc43b31b9bf5f8",
"text": "We consider the classical TD(0) algorithm implemented on a network of agents wherein the agents also incorporate updates received from neighboring agents using a gossip-like mechanism. The combined scheme is shown to converge for both discounted and average cost problems.",
"title": ""
},
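As a concrete illustration of the setup described above, the sketch below has each agent run tabular TD(0) on its own random-walk chain and then mix its value estimates with its neighbours through a gossip step. The chain MDP, ring topology, and mixing weights are illustrative assumptions, not details from the passage.

```python
import numpy as np

def gossip_td0(num_agents=4, num_states=5, steps=200, alpha=0.1, gamma=0.9, seed=0):
    """Tabular TD(0) run independently by several agents on a random-walk chain,
    with a gossip (neighbour-averaging) step after every round of updates."""
    rng = np.random.default_rng(seed)
    V = np.zeros((num_agents, num_states))          # one value table per agent
    # Ring topology: each agent averages with its two neighbours.
    W = np.zeros((num_agents, num_agents))
    for i in range(num_agents):
        W[i, i] = 0.5
        W[i, (i - 1) % num_agents] += 0.25
        W[i, (i + 1) % num_agents] += 0.25
    state = np.zeros(num_agents, dtype=int)
    for _ in range(steps):
        for a in range(num_agents):
            s = state[a]
            s_next = min(num_states - 1, max(0, s + rng.choice([-1, 1])))
            r = 1.0 if s_next == num_states - 1 else 0.0
            # Local TD(0) update on this agent's own estimate.
            V[a, s] += alpha * (r + gamma * V[a, s_next] - V[a, s])
            state[a] = 0 if s_next == num_states - 1 else s_next
        V = W @ V                                    # gossip: mix with neighbours
    return V

print(gossip_td0().round(3))
```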
{
"docid": "38a18bfce2cb33b390dd7c7cf5a4afd1",
"text": "Automatic photo assessment is a high emerging research field with wide useful ‘real-world’ applications. Due to the recent advances in deep learning, one can observe very promising approaches in the last years. However, the proposed solutions are adapted and optimized for ‘isolated’ datasets making it hard to understand the relationship between them and to benefit from the complementary information. Following a unifying approach, we propose in this paper a learning model that integrates the knowledge from different datasets. We conduct a study based on three representative benchmark datasets for photo assessment. Instead of developing for each dataset a specific model, we design and adapt sequentially a unique model which we nominate UNNA. UNNA consists of a deep convolutional neural network, that predicts for a given image three kinds of aesthetic information: technical quality, high-level semantical quality, and a detailed description of photographic rules. Due to the sequential adaptation that exploits the common features between the chosen datasets, UNNA has comparable performances with the state-of-the-art solutions with effectively less parameter. The final architecture of UNNA gives us some interesting indication of the kind of shared features as well as individual aspects of the considered datasets.",
"title": ""
},
{
"docid": "91ed0637e0533801be8b03d5ad21d586",
"text": "With the rapid development of modern wireless communication systems, the desirable miniaturization, multifunctionality strong harmonic suppression, and enhanced bandwidth of the rat-race coupler has generated much interest and continues to be a focus of research. Whether the current rat-race coupler is sufficient to adapt to the future development of microwave systems has become a heated topic.",
"title": ""
}
] |
scidocsrr
|
ecbea5f976b36a7d6e9cec541b9c6879
|
A Self-Service Supporting Business Intelligence and Big Data Analytics Architecture
|
[
{
"docid": "a44b74738723580f4056310d6856bb74",
"text": "This book covers the theory and principles of core avionic systems in civil and military aircraft, including displays, data entry and control systems, fly by wire control systems, inertial sensor and air data systems, navigation, autopilot systems an... Use the latest data mining best practices to enable timely, actionable, evidence-based decision making throughout your organization! Real-World Data Mining demystifies current best practices, showing how to use data mining to uncover hidden patterns ... Data Warehousing in the Age of the Big Data will help you and your organization make the most of unstructured data with your existing data warehouse. As Big Data continues to revolutionize how we use data, it doesn't have to create more confusion. Ex... This book explores the concepts of data mining and data warehousing, a promising and flourishing frontier in data base systems and new data base applications and is also designed to give a broad, yet ....",
"title": ""
}
] |
[
{
"docid": "88ea3f043b43a11a0a7d79e59a774c1f",
"text": "The purpose of this paper is to present an alternative systems thinking–based perspective and approach to the requirements elicitation process in complex situations. Three broad challenges associated with the requirements engineering elicitation in complex situations are explored, including the (1) role of the system observer, (2) nature of system requirements in complex situations, and (3) influence of the system environment. Authors have asserted that the expectation of unambiguous, consistent, complete, understandable, verifiable, traceable, and modifiable requirements is not consistent with complex situations. In contrast, complex situations are an emerging design reality for requirements engineering processes, marked by high levels of ambiguity, uncertainty, and emergence. This paper develops the argument that dealing with requirements for complex situations requires a change in paradigm. The elicitation of requirements for simple and technically driven systems is appropriately accomplished by proven methods. In contrast, the elicitation of requirements in complex situations (e.g., integrated multiple critical infrastructures, system-of-systems, etc.) requires more holistic thinking and can be enhanced by grounding in systems theory.",
"title": ""
},
{
"docid": "2f471c24ccb38e70627eba6383c003e0",
"text": "We present an algorithm that enables casual 3D photography. Given a set of input photos captured with a hand-held cell phone or DSLR camera, our algorithm reconstructs a 3D photo, a central panoramic, textured, normal mapped, multi-layered geometric mesh representation. 3D photos can be stored compactly and are optimized for being rendered from viewpoints that are near the capture viewpoints. They can be rendered using a standard rasterization pipeline to produce perspective views with motion parallax. When viewed in VR, 3D photos provide geometrically consistent views for both eyes. Our geometric representation also allows interacting with the scene using 3D geometry-aware effects, such as adding new objects to the scene and artistic lighting effects.\n Our 3D photo reconstruction algorithm starts with a standard structure from motion and multi-view stereo reconstruction of the scene. The dense stereo reconstruction is made robust to the imperfect capture conditions using a novel near envelope cost volume prior that discards erroneous near depth hypotheses. We propose a novel parallax-tolerant stitching algorithm that warps the depth maps into the central panorama and stitches two color-and-depth panoramas for the front and back scene surfaces. The two panoramas are fused into a single non-redundant, well-connected geometric mesh. We provide videos demonstrating users interactively viewing and manipulating our 3D photos.",
"title": ""
},
{
"docid": "12f717b4973a5290233d6f03ba05626b",
"text": "We present a modular approach for analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. Our approach relies on a constrained nonnegative matrix factorization that expresses the spatiotemporal fluorescence activity as the product of a spatial matrix that encodes the spatial footprint of each neuron in the optical field and a temporal matrix that characterizes the calcium concentration of each neuron over time. This framework is combined with a novel constrained deconvolution approach that extracts estimates of neural activity from fluorescence traces, to create a spatiotemporal processing algorithm that requires minimal parameter tuning. We demonstrate the general applicability of our method by applying it to in vitro and in vivo multi-neuronal imaging data, whole-brain light-sheet imaging data, and dendritic imaging data.",
"title": ""
},
{
"docid": "0e796ac2c27a1811eaafb8e3a65c7d59",
"text": "When dealing with large graphs, such as those that arise in the context of online social networks, a subset of nodes may be labeled. These labels can indicate demographic values, interest, beliefs or other characteristics of the nodes (users). A core problem is to use this information to extend the labeling so that all nodes are assigned a label (or labels). In this chapter, we survey classification techniques that have been proposed for this problem. We consider two broad categories: methods based on iterative application of traditional classifiers using graph information as features, and methods which propagate the existing labels via random walks. We adopt a common perspective on these methods to highlight the similarities between different approaches within and across the two categories. We also describe some extensions and related directions to the central problem of node classification.",
"title": ""
},
{
"docid": "10202f2c14808988ca74b7efe5079949",
"text": "Multiagent systems are rapidly finding applications in a variety of domains, including robotics, distributed control, telecommunications, and economics. The complexity of many tasks arising in these domains makes them difficult to solve with preprogrammed agent behaviors. The agents must, instead, discover a solution on their own, using learning. A significant part of the research on multiagent learning concerns reinforcement learning techniques. This paper provides a comprehensive survey of multiagent reinforcement learning (MARL). A central issue in the field is the formal statement of the multiagent learning goal. Different viewpoints on this issue have led to the proposal of many different goals, among which two focal points can be distinguished: stability of the agents' learning dynamics, and adaptation to the changing behavior of the other agents. The MARL algorithms described in the literature aim---either explicitly or implicitly---at one of these two goals or at a combination of both, in a fully cooperative, fully competitive, or more general setting. A representative selection of these algorithms is discussed in detail in this paper, together with the specific issues that arise in each category. Additionally, the benefits and challenges of MARL are described along with some of the problem domains where the MARL techniques have been applied. Finally, an outlook for the field is provided.",
"title": ""
},
{
"docid": "ab3d4c0562847c6a4ebfe4ab398d8e74",
"text": "Self-compassion refers to a kind and nurturing attitude toward oneself during situations that threaten one’s adequacy, while recognizing that being imperfect is part of being human. Although growing evidence indicates that selfcompassion is related to a wide range of desirable psychological outcomes, little research has explored self-compassion in older adults. The present study investigated the relationships between self-compassion and theoretically based indicators of psychological adjustment, as well as the moderating effect of self-compassion on self-rated health. A sample of 121 older adults recruited from a community library and a senior day center completed self-report measures of self-compassion, self-esteem, psychological well-being, anxiety, and depression. Results indicated that self-compassion is positively correlated with age, self-compassion is positively and uniquely related to psychological well-being, and self-compassion moderates the association between self-rated health and depression. These results suggest that interventions designed to increase self-compassion in older adults may be a fruitful direction for future applied research.",
"title": ""
},
{
"docid": "8af61009253af61dd6d4daf0ad4be30c",
"text": "Forensic anthropologists often rely on the state of decomposition to estimate the postmortem interval (PMI) in a human remains case. The state of decomposition can provide much information about the PMI, especially when decomposition is treated as a semi-continuous variable and used in conjunction with accumulated-degree-days (ADD). This preliminary study demonstrates a supplemental method of determining the PMI based on scoring decomposition using a point-based system and taking into account temperatures in which the remains were exposed. This project was designed to examine the ways that forensic anthropologists could improve their PMI estimates based on decomposition by using a more quantitative approach. A total of 68 human remains cases with a known date of death were scored for decomposition and a regression equation was calculated to predict ADD from decomposition score. ADD accounts for approximately 80% of the variation in decomposition. This study indicates that decomposition is best modeled as dependent on accumulated temperature, not just time.",
"title": ""
},
{
"docid": "083cb6546aecdc12c2a1e36a9b8d9b67",
"text": "Machine translation systems achieve near human-level performance on some languages, yet their effectiveness strongly relies on the availability of large amounts of parallel sentences, which hinders their applicability to the majority of language pairs. This work investigates how to learn to translate when having access to only large monolingual corpora in each language. We propose two model variants, a neural and a phrase-based model. Both versions leverage a careful initialization of the parameters, the denoising effect of language models and automatic generation of parallel data by iterative back-translation. These models are significantly better than methods from the literature, while being simpler and having fewer hyper-parameters. On the widely used WMT’14 English-French and WMT’16 German-English benchmarks, our models respectively obtain 28.1 and 25.2 BLEU points without using a single parallel sentence, outperforming the state of the art by more than 11 BLEU points. On low-resource languages like English-Urdu and English-Romanian, our methods achieve even better results than semisupervised and supervised approaches leveraging the paucity of available bitexts. Our code for NMT and PBSMT is publicly available.1",
"title": ""
},
{
"docid": "c8cd0c0ebd38b3e287d6e6eed965db6b",
"text": "Goalball, one of the official Paralympic events, is popular with visually impaired people all over the world. The purpose of goalball is to throw the specialized ball, with bells inside it, to the goal line of the opponents as many times as possible while defenders try to block the thrown ball with their bodies. Since goalball players cannot rely on visual information, they need to grasp the game situation using their auditory sense. However, it is hard, especially for beginners, to perceive the direction and distance of the thrown ball. In addition, they generally tend to be afraid of the approaching ball because, without visual information, they could be hit by a high-speed ball. In this paper, our goal is to develop an application called GoalBaural (Goalball + aural) that enables goalball players to improve the recognizability of the direction and distance of a thrown ball without going onto the court and playing goalball. The evaluation result indicated that our application would be efficient in improving the speed and the accuracy of locating the balls.",
"title": ""
},
{
"docid": "a88d96ab8202d7328b97f68902d0a41b",
"text": "How the motor-related cortical areas modulate the activity of the output nuclei of the basal ganglia is an important issue for understanding the mechanisms of motor control by the basal ganglia. The cortico-subthalamo-pallidal 'hyperdirect' pathway conveys powerful excitatory effects from the motor-related cortical areas to the globus pallidus, bypassing the striatum, with shorter conduction time than effects conveyed through the striatum. We emphasize the functional significance of the 'hyperdirect' pathway and propose a dynamic 'center-surround model' of basal ganglia function in the control of voluntary limb movements. When a voluntary movement is about to be initiated by cortical mechanisms, a corollary signal conveyed through the cortico-subthalamo-pallidal 'hyperdirect' pathway first inhibits large areas of the thalamus and cerebral cortex that are related to both the selected motor program and other competing programs. Then, another corollary signal through the cortico-striato-pallidal 'direct' pathway disinhibits their targets and releases only the selected motor program. Finally, the third corollary signal possibly through the cortico-striato-external pallido-subthalamo-internal pallidal 'indirect' pathway inhibits their targets extensively. Through this sequential information processing, only the selected motor program is initiated, executed and terminated at the selected timing, whereas other competing programs are canceled.",
"title": ""
},
{
"docid": "e5673ab37cb9095946d96399aa340bcc",
"text": "Water reclamation and reuse provides a unique and viable opportunity to augment traditional water supplies. As a multi-disciplined and important element of water resources development and management, water reuse can help to close the loop between water supply and wastewater disposal. Effective water reuse requires integration of water and reclaimed water supply functions. The successful development of this dependable water resource depends upon close examination and synthesis of elements from infrastructure and facilities planning, wastewater treatment plant siting, treatment process reliability, economic and financial analyses, and water utility management. In this paper, fundamental concepts of water reuse are discussed including definitions, historical developments, the role of water recycling in the hydrologic cycle, categories of water reuse, water quality criteria and regulatory requirements, and technological innovations for the safe use of reclaimed water. The paper emphasizes the integration of this alternative water supply into water resources planning, and the emergence of modern water reclamation and reuse practices from wastewater to reclaimed water to repurified water.",
"title": ""
},
{
"docid": "4de2536d5c56d6ade1b3eff97ac8037a",
"text": "Received November 25, 1992; revised manuscript received April 14, 1993; accepted May 11, 1993 We develop a maximum-likelihood (ML) algorithm for estimation and correction (autofocus) of phase errors induced in synthetic-aperture-radar (SAR) imagery. Here, M pulse vectors in the range-compressed domain are used as input for simultaneously estimating M 1 phase values across the aperture. The solution involves an eigenvector of the sample covariance matrix of the range-compressed data. The estimator is then used within the basic structure of the phase gradient autofocus (PGA) algorithm, replacing the original phase-estimation kernel. We show that, in practice, the new algorithm provides excellent restorations to defocused SAR imagery, typically in only one or two iterations. The performance of the new phase estimator is demonstrated essentially to achieve the Cramer-Rao lower bound on estimation-error variance for all but small values of target-toclutter ratio. We also show that for the case in which M is equal to 2, the ML estimator is similar to that of the original PGA method but achieves better results in practice, owing to a bias inherent in the original PGA phaseestimation kernel. Finally, we discuss the relationship of these algorithms to the shear-averaging and spatialcorrelation methods, two other phase-correction techniques that utilize the same phase-estimation kernel but that produce substantially poorer performance because they do not employ several fundamental signal-processing steps that are critical to the algorithms of the PGA class.",
"title": ""
},
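In the spirit of the eigenvector idea described above (a simplified simulation, not the paper's full PGA-based procedure), the sketch below generates range-compressed pulses that share a common signature but carry per-pulse phase errors, forms the sample covariance across pulses, and reads the phase errors (up to a constant) off its principal eigenvector. The scene model, noise level, and array sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 8, 256                          # pulses (aperture positions) x range bins
true_phi = rng.uniform(-np.pi, np.pi, size=M)
true_phi -= true_phi[0]                # phase errors are only defined up to a constant

s = rng.normal(size=N) + 1j * rng.normal(size=N)           # common range signature
noise = 0.05 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
G = np.exp(1j * true_phi)[:, None] * s[None, :] + noise     # M x N range-compressed data

C = G @ G.conj().T / N                 # M x M sample covariance across pulses
eigvals, eigvecs = np.linalg.eigh(C)   # Hermitian eigendecomposition (ascending order)
v = eigvecs[:, -1]                     # principal eigenvector ~ exp(j * phi)
est_phi = np.angle(v * np.conj(v[0]))  # remove the arbitrary global phase

# Wrap-aware residual between estimated and true phase errors (should be small).
print(np.max(np.abs(np.angle(np.exp(1j * (est_phi - true_phi))))))
```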
{
"docid": "435fcf5dab986fd87db6fc24fef3cc1a",
"text": "Web applications make life more convenient through on the activities. Many web applications have several kind of user input (e.g. personal information, a user's comment of commercial goods, etc.) for the activities. However, there are various vulnerabilities in input functions of web applications. It is possible to try malicious actions using free accessibility of the web applications. The attacks by exploitation of these input vulnerabilities enable to be performed by injecting malicious web code; it enables one to perform various illegal actions, such as SQL Injection Attacks (SQLIAs) and Cross Site Scripting (XSS). These actions come down to theft, replacing personal information, or phishing. Many solutions have devised for the malicious web code, such as AMNESIA [1] and SQL Check [2], etc. The methods use parser for the code, and limited to fixed and very small patterns, and are difficult to adapt to variations. Machine learning method can give leverage to cover far broader range of malicious web code and is easy to adapt to variations and changes. Therefore, we suggests adaptable classification of malicious web code by machine learning approach such as Support Vector Machine (SVM)[3], Naïve-Bayes[4], and k-Nearest Neighbor Algorithm[5] for detecting the exploitation user inputs.",
"title": ""
},
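As one possible instantiation of the SVM variant mentioned above (the passage does not specify a feature representation, so character n-grams are an assumption here), the sketch below trains a linear SVM on a tiny, made-up set of benign and malicious-looking input strings with scikit-learn.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny, made-up training set: 1 = malicious-looking input, 0 = benign input.
inputs = [
    "' OR '1'='1' --",
    "<script>alert('x')</script>",
    "1; DROP TABLE users",
    "\"><img src=x onerror=alert(1)>",
    "great product, fast shipping",
    "please update my phone number",
    "john.doe@example.com",
    "the delivery arrived two days late",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# Character n-grams need no language-specific parsing, which suits injected code.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
model.fit(inputs, labels)
print(model.predict(["' OR 'a'='a", "thanks for the quick reply"]))
```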
{
"docid": "102a9eb7ba9f65a52c6983d74120430e",
"text": "A key aim of social psychology is to understand the psychological processes through which independent variables affect dependent variables in the social domain. This objective has given rise to statistical methods for mediation analysis. In mediation analysis, the significance of the relationship between the independent and dependent variables has been integral in theory testing, being used as a basis to determine (1) whether to proceed with analyses of mediation and (2) whether one or several proposed mediator(s) fully or partially accounts for an effect. Synthesizing past research and offering new arguments, we suggest that the collective evidence raises considerable concern that the focus on the significance between the independent and dependent variables, both before and after mediation tests, is unjustified and can impair theory development and testing. To expand theory involving social psychological processes, we argue that attention in mediation analysis should be shifted towards assessing the magnitude and significance of indirect effects. Understanding the psychological processes by which independent variables affect dependent variables in the social domain has long been of interest to social psychologists. Although moderation approaches can test competing psychological mechanisms (e.g., Petty, 2006; Spencer, Zanna, & Fong, 2005), mediation is typically the standard for testing theories regarding process (e.g., Baron & Kenny, 1986; James & Brett, 1984; Judd & Kenny, 1981; MacKinnon, 2008; MacKinnon, Lockwood, Hoffman, West, & Sheets, 2002; Muller, Judd, & Yzerbyt, 2005; Preacher & Hayes, 2004; Preacher, Rucker, & Hayes, 2007; Shrout & Bolger, 2002). For example, dual process models of persuasion (e.g., Petty & Cacioppo, 1986) often distinguish among competing accounts by measuring the postulated underlying process (e.g., thought favorability, thought confidence) and examining their viability as mediators (Tormala, Briñol, & Petty, 2007). Thus, deciding on appropriate requirements for mediation is vital to theory development. Supporting the high status of mediation analysis in our field, MacKinnon, Fairchild, and Fritz (2007) report that research in social psychology accounts for 34% of all mediation tests in psychology more generally. In our own analysis of journal articles published from 2005 to 2009, we found that approximately 59% of articles in the Journal of Personality and Social Psychology (JPSP) and 65% of articles in Personality and Social Psychology Bulletin (PSPB) included at least one mediation test. Consistent with the observations of MacKinnon et al., we found that the bulk of these analyses continue to follow the causal steps approach outlined by Baron and Kenny (1986). Social and Personality Psychology Compass 5/6 (2011): 359–371, 10.1111/j.1751-9004.2011.00355.x a 2011 The Authors Social and Personality Psychology Compass a 2011 Blackwell Publishing Ltd The current article examines the viability of the causal steps approach in which the significance of the relationship between an independent variable (X) and a dependent variable (Y) are tested both before and after controlling for a mediator (M) in order to examine the validity of a theory specifying mediation. Traditionally, the X fi Y relationship is tested prior to mediation to determine whether there is an effect to mediate, and it is also tested after introducing a potential mediator to determine whether that mediator fully or partially accounts for the effect. 
At first glance, the requirement of a significant X fi Y association prior to examining mediation seems reasonable. If there is no significant X fi Y relationship, how can there be any mediation of it? Furthermore, the requirement that X fi Y become nonsignificant when controlling for the mediator seems sensible in order to claim ‘full mediation’. What is the point of hypothesizing or testing for additional mediators if the inclusion of one mediator renders the initial relationship indistinguishable from zero? Despite the intuitive appeal of these requirements, the present article raises serious concerns about their use.",
"title": ""
},
{
"docid": "9420760d6945440048cee3566ce96699",
"text": "In this work, we develop a computer vision based fall prevention system for hospital ward application. To prevent potential falls, once the event of patient get up from the bed is automatically detected, nursing staffs are alarmed immediately for assistance. For the detection task, we use a RGBD sensor (Microsoft Kinect). The geometric prior knowledge is exploited by identifying a set of task-specific feature channels, e.g., regions of interest. Extensive motion and shape features from both color and depth image sequences are extracted. Features from multiple modalities and channels are fused via a multiple kernel learning framework for training the event detector. Experimental results demonstrate the high accuracy and efficiency achieved by the proposed system.",
"title": ""
},
{
"docid": "342d074c84d55b60a617d31026fe23e1",
"text": "Fractured bones heal by a cascade of cellular events in which mesenchymal cells respond to unknown regulators by proliferating, differentiating, and synthesizing extracellular matrix. Current concepts suggest that growth factors may regulate different steps in this cascade (10). Recent studies suggest regulatory roles for PDGF, aFGF, bFGF, and TGF-beta in the initiation and the development of the fracture callus. Fracture healing begins immediately following injury, when growth factors, including TGF-beta 1 and PDGF, are released into the fracture hematoma by platelets and inflammatory cells. TGF-beta 1 and FGF are synthesized by osteoblasts and chondrocytes throughout the healing process. TGF-beta 1 and PDGF appear to have an influence on the initiation of fracture repair and the formation of cartilage and intramembranous bone in the initiation of callus formation. Acidic FGF is synthesized by chondrocytes, chondrocyte precursors, and macrophages. It appears to stimulate the proliferation of immature chondrocytes or precursors, and indirectly regulates chondrocyte maturation and the expression of the cartilage matrix. Presumably, growth factors in the callus at later times regulate additional steps in repair of the bone after fracture. These studies suggest that growth factors are central regulators of cellular proliferation, differentiation, and extracellular matrix synthesis during fracture repair. Abnormal growth factor expression has been implicated as causing impaired or abnormal healing in other tissues, suggesting that altered growth factor expression also may be responsible for abnormal or delayed fracture repair. As a complete understanding of fracture-healing regulation evolves, we expect new insights into the etiology of abnormal or delayed fracture healing, and possibly new therapies for these difficult clinical problems.",
"title": ""
},
{
"docid": "55158927c639ed62b53904b97a0f7a97",
"text": "Speech comprehension and production are governed by control processes. We explore their nature and dynamics in bilingual speakers with a focus on speech production. Prior research indicates that individuals increase cognitive control in order to achieve a desired goal. In the adaptive control hypothesis we propose a stronger hypothesis: Language control processes themselves adapt to the recurrent demands placed on them by the interactional context. Adapting a control process means changing a parameter or parameters about the way it works (its neural capacity or efficiency) or the way it works in concert, or in cascade, with other control processes (e.g., its connectedness). We distinguish eight control processes (goal maintenance, conflict monitoring, interference suppression, salient cue detection, selective response inhibition, task disengagement, task engagement, opportunistic planning). We consider the demands on these processes imposed by three interactional contexts (single language, dual language, and dense code-switching). We predict adaptive changes in the neural regions and circuits associated with specific control processes. A dual-language context, for example, is predicted to lead to the adaptation of a circuit mediating a cascade of control processes that circumvents a control dilemma. Effective test of the adaptive control hypothesis requires behavioural and neuroimaging work that assesses language control in a range of tasks within the same individual.",
"title": ""
},
{
"docid": "05b716c1e84b842710b07e06731beed7",
"text": "_____________________________________________________________________________ Corporate boards are comprised of individual directors but make decisions as a group. The quality of their decisions affects firm value. In this study, we focus on one aspect of board structure–– director overlap––the overlap in service for a given pair of directors in a given firm, averaged across all director pairs in the firm. Greater overlap among directors can lead to negative synergies through groupthink, a mode of thinking by highly cohesive groups where the desire for consensus potentially overrides critical evaluation of all possible alternatives. Alternatively, greater overlap can lead to positive synergies through a reduction in coordination and communication costs, resulting in more effective teamwork. We hypothesize that: (i) director overlap will have a more negative effect on firm value for dynamic firms, which value critical thinking and hence stand to lose more from groupthink; and (ii) director overlap will have a more positive effect on firm value in complex firms, which have higher coordination costs and hence benefit from better teamwork. We find results consistent with our predictions. Our results have implications for the term limits of directors because term limits impose a ceiling on director overlap. ______________________________________________________________________________ JEL Classifications: G32; G34; K22",
"title": ""
},
{
"docid": "97a7c48145d682a9ed45109d83c82a73",
"text": "We introduce a large dataset of narrative texts and questions about these texts, intended to be used in a machine comprehension task that requires reasoning using commonsense knowledge. Our dataset complements similar datasets in that we focus on stories about everyday activities, such as going to the movies or working in the garden, and that the questions require commonsense knowledge, or more specifically, script knowledge, to be answered. We show that our mode of data collection via crowdsourcing results in a substantial amount of such inference questions. The dataset forms the basis of a shared task on commonsense and script knowledge organized at SemEval 2018 and provides challenging test cases for the broader natural language understanding community.",
"title": ""
}
] |
scidocsrr
|
80fdd5b3d91cfc2c6e561cdf529eabb5
|
Artificial Roughness Encoding with a Bio-inspired MEMS-based Tactile Sensor Array
|
[
{
"docid": "f3ee129af2a833f8775c5366c188d71c",
"text": "Strong motivation for developing new prosthetic hand devices is provided by the fact that low functionality and controllability—in addition to poor cosmetic appearance—are the most important reasons why amputees do not regularly use their prosthetic hands. This paper presents the design of the CyberHand, a cybernetic anthropomorphic hand intended to provide amputees with functional hand replacement. Its design was bio-inspired in terms of its modular architecture, its physical appearance, kinematics, sensorization, and actuation, and its multilevel control system. Its underactuated mechanisms allow separate control of each digit as well as thumb–finger opposition and, accordingly, can generate a multitude of grasps. Its sensory system was designed to provide proprioceptive information as well as to emulate fundamental functional properties of human tactile mechanoreceptors of specific importance for grasp-and-hold tasks. The CyberHand control system presumes just a few efferent and afferent channels and was divided in two main layers: a high-level control that interprets the user’s intention (grasp selection and required force level) and can provide pertinent sensory feedback and a low-level control responsible for actuating specific grasps and applying the desired total force by taking advantage of the intelligent mechanics. The grasps made available by the high-level controller include those fundamental for activities of daily living: cylindrical, spherical, tridigital (tripod), and lateral grasps. The modular and flexible design of the CyberHand makes it suitable for incremental development of sensorization, interfacing, and control strategies and, as such, it will be a useful tool not only for clinical research but also for addressing neuroscientific hypotheses regarding sensorimotor control.",
"title": ""
}
] |
[
{
"docid": "afffadc35ac735d11e1a415c93d1c39f",
"text": "We examine self-control problems — modeled as time-inconsistent, presentbiased preferences—in a model where a person must do an activity exactly once. We emphasize two distinctions: Do activities involve immediate costs or immediate rewards, and are people sophisticated or naive about future self-control problems? Naive people procrastinate immediate-cost activities and preproperate—do too soon—immediate-reward activities. Sophistication mitigates procrastination, but exacerbates preproperation. Moreover, with immediate costs, a small present bias can severely harm only naive people, whereas with immediate rewards it can severely harm only sophisticated people. Lessons for savings, addiction, and elsewhere are discussed. (JEL A12, B49, C70, D11, D60, D74, D91, E21)",
"title": ""
},
{
"docid": "ba0fab446ba760a4cb18405a05cf3979",
"text": "Please c Disaster Summary. — This study aims at understanding the role of education in promoting disaster preparedness. Strengthening resilience to climate-related hazards is an urgent target of Goal 13 of the Sustainable Development Goals. Preparing for a disaster such as stockpiling of emergency supplies or having a family evacuation plan can substantially minimize loss and damages from natural hazards. However, the levels of household disaster preparedness are often low even in disaster-prone areas. Focusing on determinants of personal disaster preparedness, this paper investigates: (1) pathways through which education enhances preparedness; and (2) the interplay between education and experience in shaping preparedness actions. Data analysis is based on face-to-face surveys of adults aged 15 years in Thailand (N = 1,310) and the Philippines (N = 889, female only). Controlling for socio-demographic and contextual characteristics, we find that formal education raises the propensity to prepare against disasters. Using the KHB method to further decompose the education effects, we find that the effect of education on disaster preparedness is mainly mediated through social capital and disaster risk perception in Thailand whereas there is no evidence that education is mediated through observable channels in the Philippines. This suggests that the underlying mechanisms explaining the education effects are highly context-specific. Controlling for the interplay between education and disaster experience, we show that education raises disaster preparedness only for those households that have not been affected by a disaster in the past. Education improves abstract reasoning and anticipation skills such that the better educated undertake preventive measures without needing to first experience the harmful event and then learn later. In line with recent efforts of various UN agencies in promoting education for sustainable development, this study provides a solid empirical evidence showing positive externalities of education in disaster risk reduction. 2017TheAuthors.PublishedbyElsevierLtd.This is an open access article under theCCBY-NC-ND license (http://creativecommons.org/ licenses/by-nc-nd/4.0/).",
"title": ""
},
{
"docid": "07a42e7b4c5bc8088e9ff9b57c46f5fb",
"text": "In this paper, the concept of divergent component of motion (DCM, also called “Capture Point”) is extended to 3-D. We introduce the “Enhanced Centroidal Moment Pivot point” (eCMP) and the “Virtual Repellent Point” (VRP), which allow for the encoding of both direction and magnitude of the external forces and the total force (i.e., external plus gravitational forces) acting on the robot. Based on eCMP, VRP, and DCM, we present methods for real-time planning and tracking control of DCM trajectories in 3-D. The basic DCM trajectory generator is extended to produce continuous leg force profiles and to facilitate the use of toe-off motion during double support. The robustness of the proposed control framework is thoroughly examined, and its capabilities are verified both in simulations and experiments.",
"title": ""
},
{
"docid": "4523c880e099da9bbade4870da04f0c4",
"text": "Despite the hype about blockchains and distributed ledgers, formal abstractions of these objects are scarce1. To face this issue, in this paper we provide a proper formulation of a distributed ledger object. In brief, we de ne a ledger object as a sequence of records, and we provide the operations and the properties that such an object should support. Implemen- tation of a ledger object on top of multiple (possibly geographically dispersed) computing devices gives rise to the distributed ledger object. In contrast to the centralized object, dis- tribution allows operations to be applied concurrently on the ledger, introducing challenges on the consistency of the ledger in each participant. We provide the de nitions of three well known consistency guarantees in terms of the operations supported by the ledger object: (1) atomic consistency (linearizability), (2) sequential consistency, and (3) eventual consistency. We then provide implementations of distributed ledgers on asynchronous message passing crash- prone systems using an Atomic Broadcast service, and show that they provide eventual, sequen- tial or atomic consistency semantics respectively. We conclude with a variation of the ledger the validated ledger which requires that each record in the ledger satis es a particular validation rule.",
"title": ""
},
{
"docid": "7e91815398915670fadba3c60e772d14",
"text": "Online reviews are valuable resources not only for consumers to make decisions before purchase, but also for providers to get feedbacks for their services or commodities. In Aspect Based Sentiment Analysis (ABSA), it is critical to identify aspect categories and extract aspect terms from the sentences of user-generated reviews. However, the two tasks are often treated independently, even though they are closely related. Intuitively, the learned knowledge of one task should inform the other learning task. In this paper, we propose a multi-task learning model based on neural networks to solve them together. We demonstrate the improved performance of our multi-task learning model over the models trained separately on three public dataset released by SemEval work-",
"title": ""
},
{
"docid": "30740e33cdb2c274dbd4423e8f56405e",
"text": "A conspicuous ability of the brain is to seamlessly assimilate and process spatial and temporal features of sensory stimuli. This ability is indispensable for the recognition of natural stimuli. Yet, a general computational framework for processing spatiotemporal stimuli remains elusive. Recent theoretical and experimental work suggests that spatiotemporal processing emerges from the interaction between incoming stimuli and the internal dynamic state of neural networks, including not only their ongoing spiking activity but also their 'hidden' neuronal states, such as short-term synaptic plasticity.",
"title": ""
},
{
"docid": "af4db4d9be3f652445a47e2985070287",
"text": "BACKGROUND\nSurgical Site Infections (SSIs) are infections of incision or deep tissue at operation sites. These infections prolong hospitalization, delay wound healing, and increase the overall cost and morbidity.\n\n\nOBJECTIVES\nThis study aimed to investigate anaerobic and aerobic bacteria prevalence in surgical site infections and determinate antibiotic susceptibility pattern in these isolates.\n\n\nMATERIALS AND METHODS\nOne hundred SSIs specimens were obtained by needle aspiration from purulent material in depth of infected site. These specimens were cultured and incubated in both aerobic and anaerobic condition. For detection of antibiotic susceptibility pattern in aerobic and anaerobic bacteria, we used disk diffusion, agar dilution, and E-test methods.\n\n\nRESULTS\nA total of 194 bacterial strains were isolated from 100 samples of surgical sites. Predominant aerobic and facultative anaerobic bacteria isolated from these specimens were the members of Enterobacteriaceae family (66, 34.03%) followed by Pseudomonas aeruginosa (26, 13.4%), Staphylococcus aureus (24, 12.37%), Acinetobacter spp. (18, 9.28%), Enterococcus spp. (16, 8.24%), coagulase negative Staphylococcus spp. (14, 7.22%) and nonhemolytic streptococci (2, 1.03%). Bacteroides fragilis (26, 13.4%), and Clostridium perfringens (2, 1.03%) were isolated as anaerobic bacteria. The most resistant bacteria among anaerobic isolates were B. fragilis. All Gram-positive isolates were susceptible to vancomycin and linezolid while most of Enterobacteriaceae showed sensitivity to imipenem.\n\n\nCONCLUSIONS\nMost SSIs specimens were polymicrobial and predominant anaerobic isolate was B. fragilis. Isolated aerobic and anaerobic strains showed high level of resistance to antibiotics.",
"title": ""
},
{
"docid": "044de981e34f0180accfb799063a7ec1",
"text": "This paper proposes a novel hybrid full-bridge three-level LLC resonant converter. It integrates the advantages of the hybrid full-bridge three-level converter and the LLC resonant converter. It can operate not only under three-level mode but also under two-level mode, so it is very suitable for wide input voltage range application, such as fuel cell power system. The input current ripple and output filter can also be reduced. Three-level leg switches just sustain only half of the input voltage. ZCS is achieved for the rectifier diodes, and the voltage stress across the rectifier diodes can be minimized to the output voltage. The main switches can realize ZVS from zero to full load. A 200-400 V input, 360 V/4 A output prototype converter is built in our lab to verify the operation principle of the proposed converter",
"title": ""
},
{
"docid": "427ebc0500e91e842873c4690cdacf79",
"text": "Bounding volume hierarchy (BVH) has been widely adopted as the acceleration structure in broad-phase collision detection. Previous state-of-the-art BVH-based collision detection approaches exploited the spatio-temporal coherence of simulations by maintaining a bounding volume test tree (BVTT) front. A major drawback of these algorithms is that large deformations in the scenes decrease culling efficiency and slow down collision queries. Moreover, for front-based methods, the inefficient caching on GPU caused by the arbitrary layout of BVH and BVTT front nodes becomes a critical performance issue. We present a fast and robust BVH-based collision detection scheme on GPU that addresses the above problems by ordering and restructuring BVHs and BVTT fronts. Our techniques are based on the use of histogram sort and an auxiliary structure BVTT front log, through which we analyze the dynamic status of BVTT front and BVH quality. Our approach efficiently handles interand intra-object collisions and performs especially well in simulations where there is considerable spatio-temporal coherence. The benchmark results demonstrate that our approach is significantly faster than the previous BVH-based method, and also outperforms other state-of-the-art spatial subdivision schemes in terms of speed. CCS Concepts •Computing methodologies → Collision detection; Physical simulation;",
"title": ""
},
{
"docid": "c447e34a5048c7fe2d731aaa77b87dd3",
"text": "Bullying, in both physical and cyber worlds, has been recognized as a serious health issue among adolescents. Given its significance, scholars are charged with identifying factors that influence bullying involvement in a timely fashion. However, previous social studies of bullying are handicapped by data scarcity. The standard psychological science approach to studying bullying is to conduct personal surveys in schools. The sample size is typically in the hundreds, and these surveys are often collected only once. On the other hand, the few computational studies narrowly restrict themselves to cyberbullying, which accounts for only a small fraction of all bullying episodes.",
"title": ""
},
{
"docid": "0eec3e2c266f6c8dd39b38320a4e70fa",
"text": "The development of Urdu Nastalique O Character Recognition (OCR) is a challenging task due to the cursive nature of Urdu, complexities of Nastalique writing style and layouts of Urdu document images. In this paper, the framework of Urdu Nastalique OCR is presented. The presented system supports the recognition of Urdu Nastalique document images having font size between 14 to 44. has 86.15% ligature recognition accuracy tested on 224 document images.",
"title": ""
},
{
"docid": "c2fc4e65c484486f5612f4006b6df102",
"text": "Although flat item category structure where categories are independent in a same level has been well studied to enhance recommendation performance, in many real applications, item category is often organized in hierarchies to reflect the inherent correlations among categories. In this paper, we propose a novel matrix factorization model by exploiting category hierarchy from the perspectives of users and items for effective recommendation. Specifically, a user (an item) can be influenced (characterized) by her preferred categories (the categories it belongs to) in the hierarchy. We incorporate how different categories in the hierarchy co-influence a user and an item. Empirical results show the superiority of our approach against other counterparts.",
"title": ""
},
{
"docid": "924eb275a1205dbf7907a58fc1cee5b6",
"text": "BACKGROUND\nNutrient status of B vitamins, particularly folate and vitamin B-12, may be related to cognitive ageing but epidemiological evidence remains inconclusive.\n\n\nOBJECTIVE\nThe aim of this study was to estimate the association of serum folate and vitamin B-12 concentrations with cognitive function in middle-aged and older adults from three Central and Eastern European populations.\n\n\nMETHODS\nMen and women aged 45-69 at baseline participating in the Health, Alcohol and Psychosocial factors in Eastern Europe (HAPIEE) study were recruited in Krakow (Poland), Kaunas (Lithuania) and six urban centres in the Czech Republic. Tests of immediate and delayed recall, verbal fluency and letter search were administered at baseline and repeated in 2006-2008. Serum concentrations of biomarkers at baseline were measured in a sub-sample of participants. Associations of vitamin quartiles with baseline (n=4166) and follow-up (n=2739) cognitive domain-specific z-scores were estimated using multiple linear regression.\n\n\nRESULTS\nAfter adjusting for confounders, folate was positively associated with letter search and vitamin B-12 with word recall in cross-sectional analyses. In prospective analyses, participants in the highest quartile of folate had higher verbal fluency (p<0.01) and immediate recall (p<0.05) scores compared to those in the bottom quartile. In addition, participants in the highest quartile of vitamin B-12 had significantly higher verbal fluency scores (β=0.12; 95% CI=0.02, 0.21).\n\n\nCONCLUSIONS\nFolate and vitamin B-12 were positively associated with performance in some but not all cognitive domains in older Central and Eastern Europeans. These findings do not lend unequivocal support to potential importance of folate and vitamin B-12 status for cognitive function in older age. Long-term longitudinal studies and randomised trials are required before drawing conclusions on the role of these vitamins in cognitive decline.",
"title": ""
},
{
"docid": "34a46b80f025cd8cd25243a777b4ff6a",
"text": "This research attempts to investigate the effects of blog marketing on brand attitude and purchase intention. The elements of blog marketing are identified as community identification, interpersonal trust, message exchange, and two-way communication. The relationships among variables are pictured on the fundamental research framework provided by this study. Data were collected via an online questionnaire and 727 useable samples were collected and analyzed utilizing AMOS 5.0. The empirical findings show that the blog marketing elements can impact on brand attitude positively except for the element of community identification. Further, the analysis result also verifies the moderating effects on the relationship between blog marketing elements and brand attitude.",
"title": ""
},
{
"docid": "f1cb1df8ad0b78f0f47b2cfcf2e9c5b6",
"text": "Quantitative performance analysis in sports has become mainstream in the last decade. The focus of the analyses is shifting towards more sport-speci ic metrics due to novel technologies. These systems measure the movements of the players and the events happening during trainings and games. This allows for a more detailed evaluation of professional athletes with implications on areas such as opponent scouting, planning of training sessions, or player scouting. Previousworks that analyze soccer-related logs focus on the game-relatedperformanceof theplayers and teams. Vast majority of these methodologies concentrate on descriptive statistics that capture some part of the players’ strategy. For example, in case of soccer, the average number of shots, goals, fouls, passes are derived both for the teams and for the players [1, 5]. Other works identify and analyze the outcome of the strategies that teams apply [18, 16, 13, 11, 9, 24, 14]. However, the physical performance and in particular the movements of players has not received detailed attention yet. It is challenging to get access to datasets related to the physical performance of soccer players. The teams consider such information highly con idential, especially if it covers in-game performance. Despite the fact that numerous teams deployed player tracking systems in their stadiums, datasets of this nature are not available for research or for public usage. It is nearly impossible to havequantitative information on the physical performance of all the teams of a competition. Hence, most of the analysis and evaluation of the players’ performance do not contain much information on the physical aspect of the game, creating a blindspot in performance analysis. We propose a novelmethod to solve this issue by derivingmovement characteristics of soccer players. We use event-based datasets from data provider companies covering 50+ soccer leagues allowing us to analyze the movement pro iles of potentially tens of thousands of players without any major investment. Our methodology does not require expensive, dedicated player tracking system deployed in the stadium. Instead, if the game is broadcasted, our methodology can be used. As a consequence, our technique does not require the consent of the involved teams yet it can provide insights on the physical performance of many players in different teams. The main contribution of our work is threefold:",
"title": ""
},
{
"docid": "5eb526843c41d2549862b60c17110b5b",
"text": "■ Abstract We explore the social dimension that enables adaptive ecosystem-based management. The review concentrates on experiences of adaptive governance of socialecological systems during periods of abrupt change (crisis) and investigates social sources of renewal and reorganization. Such governance connects individuals, organizations, agencies, and institutions at multiple organizational levels. Key persons provide leadership, trust, vision, meaning, and they help transform management organizations toward a learning environment. Adaptive governance systems often self-organize as social networks with teams and actor groups that draw on various knowledge systems and experiences for the development of a common understanding and policies. The emergence of “bridging organizations” seem to lower the costs of collaboration and conflict resolution, and enabling legislation and governmental policies can support self-organization while framing creativity for adaptive comanagement efforts. A resilient social-ecological system may make use of crisis as an opportunity to transform into a more desired state.",
"title": ""
},
{
"docid": "7fa8d82b55c5ae2879123380ef1a8505",
"text": "In the general context of Knowledge Discovery, speciic techniques , called Text Mining techniques, are necessary to extract information from unstructured textual data. The extracted information can then be used for the classiication of the content of large textual bases. In this paper, we present two examples of information that can be automatically extracted from text collections: probabilistic associations of keywords and prototypical document instances. The Natural Language Processing (NLP) tools necessary for such extractions are also presented.",
"title": ""
},
{
"docid": "3038334926608dbe4cdb091cf0e955eb",
"text": "Cloud computing has undergone rapid expansion throughout the last decade. Many companies and organizations have made the transition from tra ditional data centers to the cloud due to its flexibility and lower cost. However, traditional data centers are still being relied upon by those who are less certain about the security of cloud. This problem is highlighted by the fact that there only exist limited efforts on threat modeling for cloud data centers. In this paper, we conduct comprehensive threat modeling exercises based on two representative cloud infrastructures using several popular threat modeling methods, including attack surface, attack trees, attack graphs, and security metrics based on attack trees and attack graphs, respectively. Those threat modeling efforts provide cloud providers practical lessons and means toward better evaluating, understanding, and improving their cloud infrastructures. Our results may also imbed more con fidence in potential cloud tenants by providing them a clearer picture about po tential threats in cloud infrastructures and corresponding solutions.",
"title": ""
},
{
"docid": "9f04ac4067179aadf5e429492c7625e9",
"text": "We provide a model that links an asset’s market liquidity — i.e., the ease with which it is traded — and traders’ funding liquidity — i.e., the ease with which they can obtain funding. Traders provide market liquidity, and their ability to do so depends on their availability of funding. Conversely, traders’ funding, i.e., their capital and the margins they are charged, depend on the assets’ market liquidity. We show that, under certain conditions, margins are destabilizing and market liquidity and funding liquidity are mutually reinforcing, leading to liquidity spirals. The model explains the empirically documented features that market liquidity (i) can suddenly dry up, (ii) has commonality across securities, (iii) is related to volatility, (iv) is subject to “flight to quality”, and (v) comoves with the market, and it provides new testable predictions.",
"title": ""
},
{
"docid": "fa20d7bf8a6e99691a42dcd756ed1cc6",
"text": "IoT (Internet of Things) is acommunication network that connects physical or things to each other or with a group all together. The use is widely popular nowadays and its usage has expanded into interesting subjects. Especially, it is getting more popular to research in cross subjects such as mixing smart systems with computer sciences and engineering applications together. Object detection is one of these subjects. Realtime object detection is one of the foremost interesting subjects because of its compute costs. Gaps in methodology, unknown concepts and insufficiency in mathematical modeling makes it harder for designing these computing algorithms. Algortihms in these applications can be developed with in machine learning and/or numerical methods that are available in scientific literature. These operations are possible only if communication of objects within theirselves in physical space and awareness of the objects nearby. Artificial Neural Networks may help in these studies. In this study, yolo algorithm which is seen as a key element for real-time object detection in IoT is researched. It is realized and shown in results that optimization of computing and analyzation of system aside this research which takes Yolo algorithm as a foundation point [10]. As a result, it is seen that our model approach has an interesting potential and novelty.",
"title": ""
}
] |
scidocsrr
|
855d1d47b3a1ddb7069912ec769cd41b
|
Stock portfolio selection using learning-to-rank algorithms with news sentiment
|
[
{
"docid": "14838947ee3b95c24daba5a293067730",
"text": "In this paper we address the issue of learning to rank for document retrieval. In the task, a model is automatically created with some training data and then is utilized for ranking of documents. The goodness of a model is usually evaluated with performance measures such as MAP (Mean Average Precision) and NDCG (Normalized Discounted Cumulative Gain). Ideally a learning algorithm would train a ranking model that could directly optimize the performance measures with respect to the training data. Existing methods, however, are only able to train ranking models by minimizing loss functions loosely related to the performance measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. To deal with the problem, we propose a novel learning algorithm within the framework of boosting, which can minimize a loss function directly defined on the performance measures. Our algorithm, referred to as AdaRank, repeatedly constructs 'weak rankers' on the basis of reweighted training data and finally linearly combines the weak rankers for making ranking predictions. We prove that the training process of AdaRank is exactly that of enhancing the performance measure used. Experimental results on four benchmark datasets show that AdaRank significantly outperforms the baseline methods of BM25, Ranking SVM, and RankBoost.",
"title": ""
}
] |
[
{
"docid": "bd5589d700173efdfb38a8cf9f8bbb3a",
"text": "Interior permanent-magnet (IPM) synchronous motors possess special features for adjustable-speed operation which distinguish them from other classes of ac machines. They are robust high powerdensity machines capable of operating at high motor and inverter efficiencies over wide speed ranges, including considerable ranges of constant-power operation. The magnet cost is minimized by the low magnet weight requirements of the IPM design. The impact of the buried-magnet configuration on the motor's electromagnetic characteristics is discussed. The rotor magnetic circuit saliency preferentially increases the quadrature-axis inductance and introduces a reluctance torque term into the IPM motor's torque equation. The electrical excitation requirements for the IPM synchronous motor are also discussed. The control of the sinusoidal phase currents in magnitude and phase angle with respect to the rotor orientation provides a means for achieving smooth responsive torque control. A basic feedforward algorithm for executing this type of current vector torque control is discussed, including the implications of current regulator saturation at high speeds. The key results are illustrated using a combination of simulation and prototype IPM drive measurements.",
"title": ""
},
{
"docid": "9869bc5dfc8f20b50608f0d68f7e49ba",
"text": "Automated discovery of early visual concepts from raw image data is a major open challenge in AI research. Addressing this problem, we propose an unsupervised approach for learning disentangled representations of the underlying factors of variation. We draw inspiration from neuroscience, and show how this can be achieved in an unsupervised generative model by applying the same learning pressures as have been suggested to act in the ventral visual stream in the brain. By enforcing redundancy reduction, encouraging statistical independence, and exposure to data with transform continuities analogous to those to which human infants are exposed, we obtain a variational autoencoder (VAE) framework capable of learning disentangled factors. Our approach makes few assumptions and works well across a wide variety of datasets. Furthermore, our solution has useful emergent properties, such as zero-shot inference and an intuitive understanding of “objectness”.",
"title": ""
},
{
"docid": "982253c9f0c05e50a070a0b2e762abd7",
"text": "In this work, we focus on the challenge of taking partial observations of highly-stylized text and generalizing the observations to generate unobserved glyphs in the ornamented typeface. To generate a set of multi-content images following a consistent style from very few examples, we propose an end-to-end stacked conditional GAN model considering content along channels and style along network layers. Our proposed network transfers the style of given glyphs to the contents of unseen ones, capturing highly stylized fonts found in the real-world such as those on movie posters or infographics. We seek to transfer both the typographic stylization (ex. serifs and ears) as well as the textual stylization (ex. color gradients and effects.) We base our experiments on our collected data set including 10,000 fonts with different styles and demonstrate effective generalization from a very small number of observed glyphs.",
"title": ""
},
{
"docid": "6fee1cce864d858af6e28959961f5c24",
"text": "Much of the organic light emitting diode (OLED) characterization published to date addresses the high current regime encountered in the operation of passively addressed displays. Higher efficiency and brightness can be obtained by driving with an active matrix, but the lower instantaneous pixel currents place the OLEDs in a completely different operating mode. Results at these low current levels are presented and their impact on active matrix display design is discussed.",
"title": ""
},
{
"docid": "b8466da90f2e75df2cc8453564ddb3e8",
"text": "Deep neural networks have recently shown impressive classification performance on a diverse set of visual tasks. When deployed in real-world (noise-prone) environments, it is equally important that these classifiers satisfy robustness guarantees: small perturbations applied to the samples should not yield significant losses to the performance of the predictor. The goal of this paper is to discuss the robustness of deep networks to a diverse set of perturbations that may affect the samples in practice, including adversarial perturbations, random noise, and geometric transformations. Our paper further discusses the recent works that build on the robustness analysis to provide geometric insights on the classifier’s decision surface, which help in developing a better understanding of deep nets. The overview finally presents recent solutions that attempt to increase the robustness of deep networks. We hope that this review paper will contribute shedding light on the open research challenges in the robustness of deep networks, and will stir interest in the analysis of their fundamental properties.",
"title": ""
},
{
"docid": "f1aee9423f768081f575eeb1334cf7e4",
"text": "The mobile robots often perform the dangerous missions such as planetary exploration, reconnaissance, anti-terrorism, rescue, and so on. So it is required that the robots should be able to move in the complex and unpredictable environment where the ground might be soft and hard, even and uneven. To access to such terrains, a novel robot (NEZA-I) with the self-adaptive mobile mechanism is proposed and developed. It consists of a control system unit and two symmetric transformable wheel-track (TWT) units. Each TWT unit is driven only by one servo motor, and can efficiently move over rough terrain by changing the locomotion mode and transforming the track configuration. It means that the mobile mechanism of NEZA-I has self-adaptability to the irregular environment. The paper proposes the design concept of NEZA-I, presents the structure and the drive system of NEZA-I, and describes the self-adaptive principle of the mobile mechanism to the rough terrains. The locomotion mode and posture of the mobile mechanism is analyzed by the means of simulation. Finally, basic experiments verify the mobility of NEZA-I.",
"title": ""
},
{
"docid": "a84d2de19a34b914e583c9f4379b68da",
"text": "English) xx Abstract(Arabic) xxiiArabic) xxii",
"title": ""
},
{
"docid": "fce58bfa94acf2b26a50f816353e6bf2",
"text": "The perspective directions in evaluating network security are simulating possible malefactor’s actions, building the representation of these actions as attack graphs (trees, nets), the subsequent checking of various properties of these graphs, and determining security metrics which can explain possible ways to increase security level. The paper suggests a new approach to security evaluation based on comprehensive simulation of malefactor’s actions, construction of attack graphs and computation of different security metrics. The approach is intended for using both at design and exploitation stages of computer networks. The implemented software system is described, and the examples of experiments for analysis of network security level are considered.",
"title": ""
},
{
"docid": "fd721261c29395867ce3966bdaeeaa7a",
"text": "Cutaneous saltation provides interesting possibilities for applications. An illusion of vibrotactile mediolateral movement was elicited to a left dorsal forearm to investigate emotional (i.e., pleasantness) and cognitive (i.e., continuity) experiences to vibrotactile stimulation. Twelve participants were presented with nine saltatory stimuli delivered to a linearly aligned row of three vibrotactile actuators separated by 70 mm in distance. The stimuli were composed of three temporal parameters of 12, 24 and 48 ms for both burst duration and inter-burst interval to form all nine possible uniform pairs. First, the stimuli were ranked by the participants using a special three-step procedure. Second, the participants rated the stimuli using two nine-point bipolar scales measuring the pleasantness and continuity of each stimulus, separately. The results showed especially the interval between two successive bursts was a significant factor for saltation. Moreover, the temporal parameters seemed to affect more the experienced continuity of the stimuli compared to pleasantness. These findings encourage us to continue to further study the saltation and the effect of different parameters for subjective experience.",
"title": ""
},
{
"docid": "85c800d32457fe9532f892c1703ba2d3",
"text": "In this paper design and implementation of a two stage fully differential, RC Miller compensated CMOS operational amplifier is presented. High gain enables this circuit to operate efficiently in a closed loop feedback system, whereas high bandwidth makes it suitable for high speed applications. The design is also able to address any fluctuation in supply or dc input voltages and stabilizes the operation by nullifying the effects due to perturbations. Implementation has been done in 0.18 um technology using libraries from tsmc with the help of tools from Mentor Graphics and Cadence. Op-amp designed here exhibits >95 dB DC differential gain, ~135 MHz unity gain bandwidth, phase margin of ~53, and ~132 V/uS slew rate for typical 1 pF differential capacitive load. The power dissipation for 3.3V supply voltage at 27C temperature under other nominal conditions is 2.29mW. Excellent output differential swing of 5.9V and good liner range of operation are some of the additional features of design.",
"title": ""
},
{
"docid": "2a1eb2fa37809bfce258476463af793c",
"text": "Parkinson’s disease (PD) is a chronic disease that develops over years and varies dramatically in its clinical manifestations. A preferred strategy to resolve this heterogeneity and thus enable better prognosis and targeted therapies is to segment out more homogeneous patient sub-populations. However, it is challenging to evaluate the clinical similarities among patients because of the longitudinality and temporality of their records. To address this issue, we propose a deep model that directly learns patient similarity from longitudinal and multi-modal patient records with an Recurrent Neural Network (RNN) architecture, which learns the similarity between two longitudinal patient record sequences through dynamically matching temporal patterns in patient sequences. Evaluations on real world patient records demonstrate the promising utility and efficacy of the proposed architecture in personalized predictions.",
"title": ""
},
{
"docid": "6baefa75db89210c4059d3c1dad46488",
"text": "In this paper, we propose a framework for low-energy digital signal processing (DSP) where the supply voltage is scaled beyond the critical voltage required to match the critical path delay to the throughput. This deliberate introduction of input-dependent errors leads to degradation in the algorithmic performance, which is compensated for via algorithmic noise-tolerance (ANT) schemes. The resulting setup that comprises of the DSP architecture operating at sub-critical voltage and the error control scheme is referred to as soft DSP. It is shown that technology scaling renders the proposed scheme more effective as the delay penalty suffered due to voltage scaling reduces due to short channel effects. The effectiveness of the proposed scheme is also enhanced when arithmetic units with a higher “delay-imbalance” are employed. A prediction based error-control scheme is proposed to enhance the performance of the filtering algorithm in presence of errors due to soft computations. For a frequency selective filter, it is shown that the proposed scheme provides 60% 81% reduction in energy dissipation for filter bandwidths up to 0.5~ (where 27r corresponds to the sampling frequency fs) over that achieved via conventional voltage scaling, with a maximum of 0.5dB degradation in the output signal-to-noise ratio (SN%). It is also shown that the proposed algorithmic noise-tolerance schemes can be used to improve the performance of DSP algorithms in presence of bit-error rates of upto 10-s due to deep submicron (DSM) noise.",
"title": ""
},
{
"docid": "d3444b0cee83da2a94f4782c79e0ce48",
"text": "Predicting student academic performance plays an important role in academics. Classifying st udents using conventional techniques cannot give the desired lev l of accuracy, while doing it with the use of soft computing techniques may prove to be beneficial. A student can be classi fied into one of the available categories based on his behavioral and qualitative features. The paper presents a Neural N etwork model fused with Fuzzy Logic to model academi c profile of students. The model mimics teacher’s ability to deal with imprecise information representing student’s characteristics in linguistic form. The suggested model is developed in MATLAB which takes into consideration various features of students under study. The input to the model consists of dat of students studying in any faculty. A combination of Fuzzy Logic ARTMAP Neural Network results into a model useful for management of educational institutes for improving the quality of education. A good prediction of student’s success ione way to be in the competition in education sys tem. The use of Soft Computing methodology is justified for its real-time applicability in education system.",
"title": ""
},
{
"docid": "f4b271c7ee8bfd9f8aa4d4cf84c4efd4",
"text": "Today, and possibly for a long time to come, the full driving task is too complex an activity to be fully formalized as a sensing-acting robotics system that can be explicitly solved through model-based and learning-based approaches in order to achieve full unconstrained vehicle autonomy. Localization, mapping, scene perception, vehicle control, trajectory optimization, and higher-level planning decisions associated with autonomous vehicle development remain full of open challenges. This is especially true for unconstrained, real-world operation where the margin of allowable error is extremely small and the number of edge-cases is extremely large. Until these problems are solved, human beings will remain an integral part of the driving task, monitoring the AI system as it performs anywhere from just over 0% to just under 100% of the driving. The governing objectives of the MIT Autonomous Vehicle Technology (MIT-AVT) study are to (1) undertake large-scale real-world driving data collection that includes high-definition video to fuel the development of deep learning based internal and external perception systems, (2) gain a holistic understanding of how human beings interact with vehicle automation technology by integrating video data with vehicle state data, driver characteristics, mental models, and self-reported experiences with technology, and (3) identify how technology and other factors related to automation adoption and use can be improved in ways that save lives. In pursuing these objectives, we have instrumented 21 Tesla Model S and Model X vehicles, 2 Volvo S90 vehicles, 2 Range Rover Evoque, and 2 Cadillac CT6 vehicles for both long-term (over a year per driver) and medium term (one month per driver) naturalistic driving data collection. Furthermore, we are continually developing new methods for analysis of the massive-scale dataset collected from the instrumented vehicle fleet. The recorded data streams include IMU, GPS, CAN messages, and high-definition video streams of the driver face, the driver cabin, the forward roadway, and the instrument cluster (on select vehicles). The study is on-going and growing. To date, we have 99 participants, 11,846 days of participation, 405,807 miles, and 5.5 billion video frames. This paper presents the design of the study, the data collection hardware, the processing of the data, and the computer vision algorithms currently being used to extract actionable knowledge from the data. MIT Autonomous Vehicle",
"title": ""
},
{
"docid": "49ef68eabca989e07f420a3a88386c77",
"text": "Identifying the language used will typically be the first step in most natural language processing tasks. Among the wide variety of language identification methods discussed in the literature, the ones employing the Cavnar and Trenkle (1994) approach to text categorization based on character n-gram frequencies have been particularly successful. This paper presents the R extension package textcat for n-gram based text categorization which implements both the Cavnar and Trenkle approach as well as a reduced n-gram approach designed to remove redundancies of the original approach. A multi-lingual corpus obtained from the Wikipedia pages available on a selection of topics is used to illustrate the functionality of the package and the performance of the provided language identification methods.",
"title": ""
},
{
"docid": "17953a3e86d3a4396cbd8a911c477f07",
"text": "We introduce Deep Semantic Embedding (DSE), a supervised learning algorithm which computes semantic representation for text documents by respecting their similarity to a given query. Unlike other methods that use singlelayer learning machines, DSE maps word inputs into a lowdimensional semantic space with deep neural network, and achieves a highly nonlinear embedding to model the human perception of text semantics. Through discriminative finetuning of the deep neural network, DSE is able to encode the relative similarity between relevant/irrelevant document pairs in training data, and hence learn a reliable ranking score for a query-document pair. We present test results on datasets including scientific publications and user-generated knowledge base.",
"title": ""
},
{
"docid": "9897f5e64b4a5d6d80fadb96cb612515",
"text": "Deep convolutional neural networks (CNNs) are rapidly becoming the dominant approach to computer vision and a major component of many other pervasive machine learning tasks, such as speech recognition, natural language processing, and fraud detection. As a result, accelerators for efficiently evaluating CNNs are rapidly growing in popularity. The conventional approaches to designing such CNN accelerators is to focus on creating accelerators to iteratively process the CNN layers. However, by processing each layer to completion, the accelerator designs must use off-chip memory to store intermediate data between layers, because the intermediate data are too large to fit on chip. In this work, we observe that a previously unexplored dimension exists in the design space of CNN accelerators that focuses on the dataflow across convolutional layers. We find that we are able to fuse the processing of multiple CNN layers by modifying the order in which the input data are brought on chip, enabling caching of intermediate data between the evaluation of adjacent CNN layers. We demonstrate the effectiveness of our approach by constructing a fused-layer CNN accelerator for the first five convolutional layers of the VGGNet-E network and comparing it to the state-of-the-art accelerator implemented on a Xilinx Virtex-7 FPGA. We find that, by using 362KB of on-chip storage, our fused-layer accelerator minimizes off-chip feature map data transfer, reducing the total transfer by 95%, from 77MB down to 3.6MB per image.",
"title": ""
},
{
"docid": "7d33ba30fd30dce2cd4a3f5558a8c0ba",
"text": "It has long been conjectured that hypothesis spaces suitable for data that is compositional in nature, such as text or images, may be more efficiently represented with deep hierarchical architectures than with shallow ones. Despite the vast empirical evidence, formal arguments to date are limited and do not capture the kind of networks used in practice. Using tensor factorization, we derive a universal hypothesis space implemented by an arithmetic circuit over functions applied to local data structures (e.g. image patches). The resulting networks first pass the input through a representation layer, and then proceed with a sequence of layers comprising sum followed by product-pooling, where sum corresponds to the widely used convolution operator. The hierarchical structure of networks is born from factorizations of tensors based on the linear weights of the arithmetic circuits. We show that a shallow network corresponds to a rank-1 decomposition, whereas a deep network corresponds to a Hierarchical Tucker (HT) decomposition. Log-space computation for numerical stability transforms the networks into SimNets.",
"title": ""
},
{
"docid": "5cc374d64b9f62de9c1142770bb6e0e7",
"text": "The demand for inexpensive and ubiquitous accurate motion-detection sensors for road safety, smart homes and robotics justifies the interest in single-chip mm-Wave radars: a high carrier frequency allows for a high angular resolution in a compact multi-antenna system and a wide bandwidth allows fora high depth resolution. With the objective of single-chip radar systems, CMOS is the natural candidate to replace SiGe as a leading technology [1-6].",
"title": ""
},
{
"docid": "c8bbc713aecbc6682d21268ee58ca258",
"text": "Traditional approaches to knowledge base completion have been based on symbolic representations. Lowdimensional vector embedding models proposed recently for this task are attractive since they generalize to possibly unlimited sets of relations. A significant drawback of previous embedding models for KB completion is that they merely support reasoning on individual relations (e.g., bornIn(X,Y )⇒ nationality(X,Y )). In this work, we develop models for KB completion that support chains of reasoning on paths of any length using compositional vector space models. We construct compositional vector representations for the paths in the KB graph from the semantic vector representations of the binary relations in that path and perform inference directly in the vector space. Unlike previous methods, our approach can generalize to paths that are unseen in training and, in a zero-shot setting, predict target relations without supervised training data for that relation.",
"title": ""
}
] |
scidocsrr
|
b299604767a625ea5384e321d2bb238d
|
Generalized Thompson sampling for sequential decision-making and causal inference
|
[
{
"docid": "3734fd47cf4e4e5c00f660cbb32863f0",
"text": "We describe a new Bayesian click-through rate (CTR) prediction algorithm used for Sponsored Search in Microsoft’s Bing search engine. The algorithm is based on a probit regression model that maps discrete or real-valued input features to probabilities. It maintains Gaussian beliefs over weights of the model and performs Gaussian online updates derived from approximate message passing. Scalability of the algorithm is ensured through a principled weight pruning procedure and an approximate parallel implementation. We discuss the challenges arising from evaluating and tuning the predictor as part of the complex system of sponsored search where the predictions made by the algorithm decide about future training sample composition. Finally, we show experimental results from the production system and compare to a calibrated Naïve Bayes algorithm.",
"title": ""
}
] |
[
{
"docid": "c39b143861d1e0c371ec1684bb29f4cc",
"text": "Data races are a particularly unpleasant kind of threading bugs. They are hard to find and reproduce -- you may not observe a bug during the entire testing cycle and will only see it in production as rare unexplainable failures. This paper presents ThreadSanitizer -- a dynamic detector of data races. We describe the hybrid algorithm (based on happens-before and locksets) used in the detector. We introduce what we call dynamic annotations -- a sort of race detection API that allows a user to inform the detector about any tricky synchronization in the user program. Various practical aspects of using ThreadSanitizer for testing multithreaded C++ code at Google are also discussed.",
"title": ""
},
{
"docid": "60922247ab6ec494528d3a03c0909231",
"text": "This paper proposes a new \"zone controlled induction heating\" (ZCIH) system. The ZCIH system consists of two or more sets of a high-frequency inverter and a split work coil, which adjusts the coil current amplitude in each zone independently. The ZCIH system has capability of controlling the exothermic distribution on the work piece to avoid the strain caused by a thermal expansion. As a result, the ZCIH system enables a rapid heating performance as well as an temperature uniformity. This paper proposes current phase control making the coil current in phase with each other, to adjust the coil current amplitude even when a mutual inductance exists between the coils. This paper presents operating principle, theoretical analysis, and experimental results obtained from a laboratory setup and a six-zone prototype for a semiconductor processing.",
"title": ""
},
{
"docid": "c1f803e02ea7d6ef3bf6644e3aa17862",
"text": "Recurrent neural networks are prime candidates for learning evolutions in multi-dimensional time series data. The performance of such a network is judged by the loss function, which is aggregated into a scalar value that decreases during training. Observing only this number hides the variation that occurs within the typically large training and testing data sets. Understanding these variations is of highest importance to adjust network hyperparameters, such as the number of neurons, number of layers or to adjust the training set to include more representative examples. In this paper, we design a comprehensive and interactive system that allows users to study the output of recurrent neural networks on both the complete training data and testing data. We follow a coarse-to-fine strategy, providing overviews of annual, monthly and daily patterns in the time series and directly support a comparison of different hyperparameter settings. We applied our method to a recurrent convolutional neural network that was trained and tested on 25 years of climate data to forecast meteorological attributes, such as temperature, pressure and wind velocity. We further visualize the quality of the forecasting models, when applied to various locations on Earth and we examine the combination of several forecasting models. This is the authors preprint. The definitive version is available at http://diglib.eg.org/ and http://onlinelibrary.wiley.com/.",
"title": ""
},
{
"docid": "e141b36a3e257c4b8155cdf0682a0143",
"text": "Major depressive disorder is a common mental disorder that affects almost 7% of the adult U.S. population. The 2017 Audio/Visual Emotion Challenge (AVEC) asks participants to build a model to predict depression levels based on the audio, video, and text of an interview ranging between 7-33 minutes. Since averaging features over the entire interview will lose most temporal information, how to discover, capture, and preserve useful temporal details for such a long interview are significant challenges. Therefore, we propose a novel topic modeling based approach to perform context-aware analysis of the recording. Our experiments show that the proposed approach outperforms context-unaware methods and the challenge baselines for all metrics.",
"title": ""
},
{
"docid": "d79b440e5417fae517286206394e8685",
"text": "When using plenoptic camera for digital refocusing, angular undersampling can cause severe (angular) aliasing artifacts. Previous approaches have focused on avoiding aliasing by pre-processing the acquired light field via prefiltering, demosaicing, reparameterization, etc. In this paper, we present a different solution that first detects and then removes aliasing at the light field refocusing stage. Different from previous frequency domain aliasing analysis, we carry out a spatial domain analysis to reveal whether the aliasing would occur and uncover where in the image it would occur. The spatial analysis also facilitates easy separation of the aliasing vs. non-aliasing regions and aliasing removal. Experiments on both synthetic scene and real light field camera array data sets demonstrate that our approach has a number of advantages over the classical prefiltering and depth-dependent light field rendering techniques.",
"title": ""
},
{
"docid": "67978cd2f94cabb45c1ea2c571cef4de",
"text": "Studies identifying oil shocks using structural vector autoregressions (VARs) reach different conclusions on the relative importance of supply and demand factors in explaining oil market fluctuations. This disagreement is due to different assumptions on the oil supply and demand elasticities that determine the identification of the oil shocks. We provide new estimates of oil-market elasticities by combining a narrative analysis of episodes of large drops in oil production with country-level instrumental variable regressions. When the estimated elasticities are embedded into a structural VAR, supply and demand shocks play an equally important role in explaining oil prices and oil quantities. Published by Elsevier B.V.",
"title": ""
},
{
"docid": "3ee3cf039b1bc03d6b6e504ae87fc62f",
"text": "Objective: This paper tackles the problem of transfer learning in the context of electroencephalogram (EEG)-based brain–computer interface (BCI) classification. In particular, the problems of cross-session and cross-subject classification are considered. These problems concern the ability to use data from previous sessions or from a database of past users to calibrate and initialize the classifier, allowing a calibration-less BCI mode of operation. Methods: Data are represented using spatial covariance matrices of the EEG signals, exploiting the recent successful techniques based on the Riemannian geometry of the manifold of symmetric positive definite (SPD) matrices. Cross-session and cross-subject classification can be difficult, due to the many changes intervening between sessions and between subjects, including physiological, environmental, as well as instrumental changes. Here, we propose to affine transform the covariance matrices of every session/subject in order to center them with respect to a reference covariance matrix, making data from different sessions/subjects comparable. Then, classification is performed both using a standard minimum distance to mean classifier, and through a probabilistic classifier recently developed in the literature, based on a density function (mixture of Riemannian Gaussian distributions) defined on the SPD manifold. Results: The improvements in terms of classification performances achieved by introducing the affine transformation are documented with the analysis of two BCI datasets. Conclusion and significance: Hence, we make, through the affine transformation proposed, data from different sessions and subject comparable, providing a significant improvement in the BCI transfer learning problem.",
"title": ""
},
{
"docid": "12adb5e324d971d2c752f2193cec3126",
"text": "Despite recent excitement generated by the P2P paradigm and despite surprisingly fast deployment of some P2P applications, there are few quantitative evaluations of P2P systems behavior. Due to its open architecture and achieved scale, Gnutella is an interesting P2P architecture case study. Gnutella, like most other P2P applications, builds at the application level a virtual network with its own routing mechanisms. The topology of this overlay network and the routing mechanisms used have a significant influence on application properties such as performance, reliability, and scalability. We built a ‘crawler’ to extract the topology of Gnutella’s application level network, we analyze the topology graph and evaluate generated network traffic. We find that although Gnutella is not a pure power-law network, its current configuration has the benefits and drawbacks of a power-law structure. These findings lead us to propose changes to Gnutella protocol and implementations that bring significant performance and scalability improvements.",
"title": ""
},
{
"docid": "87b5c0021e513898693e575ca5479757",
"text": "We present a statistical mechanics model of deep feed forward neural networks (FFN). Our energy-based approach naturally explains several known results and heuristics, providing a solid theoretical framework and new instruments for a systematic development of FFN. We infer that FFN can be understood as performing three basic steps: encoding, representation validation and propagation. We obtain a set of natural activations – such as sigmoid, tanh and ReLu – together with a state-of-the-art one, recently obtained by Ramachandran et al. [1] using an extensive search algorithm. We term this activation ESP (Expected Signal Propagation), explain its probabilistic meaning, and study the eigenvalue spectrum of the associated Hessian on classification tasks. We find that ESP allows for faster training and more consistent performances over a wide range of network architectures.",
"title": ""
},
{
"docid": "b10447097f8d513795b4f4e08e1838d8",
"text": "We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use a deep highway BiLSTM architecture with constrained decoding, while observing a number of recent best practices for initialization and regularization. Our 8-layer ensemble model achieves 83.2 F1 on the CoNLL 2005 test set and 83.4 F1 on CoNLL 2012, roughly a 10% relative error reduction over the previous state of the art. Extensive empirical analysis of these gains show that (1) deep models excel at recovering long-distance dependencies but can still make surprisingly obvious errors, and (2) that there is still room for syntactic parsers to improve these results.",
"title": ""
},
{
"docid": "664b003cedbca63ebf775bd9f062b8f1",
"text": "Since 1900, soil organic matter (SOM) in farmlands worldwide has declined drastically as a result of carbon turnover and cropping systems. Over the past 17 years, research trials were established to evaluate the efficacy of different commercial humates products on potato production. Data from humic acid (HA) trials showed that different cropping systems responded differently to different products in relation to yield and quality. Important qualifying factors included: source; concentration; processing; chelating or complexing capacity of the humic acid products; functional groups (Carboxyl; Phenol; Hydroxyl; Ketone; Ester; Ether; Amine), rotation and soil quality factors; consistency of the product in enhancing yield and quality of potato crops; mineralization effect; and influence on fertilizer use efficiency. Properties of humic substances, major constituents of soil organic matter, include chelation, mineralization, buffer effect, clay mineral-organic interaction, and cation exchange. Humates increase phosphorus availability by complexing ions into stable compounds, allowing the phosphorus ion to remain exchangeable for plants’ uptake. Collectively, the consistent use of good quality products in our replicated research plots in different years resulted in a yield increase from 11.4% to the maximum of 22.3%. Over the past decade, there has been a major increase in the quality of research and development of organic and humic acid products by some well-established manufacturers. Our experimentations with these commercial products showed an increase in the yield and quality of crops.",
"title": ""
},
{
"docid": "03dc23b2556e21af9424500e267612bb",
"text": "File fragment classification is an important and difficult problem in digital forensics. Previous works in this area mainly relied on specific byte sequences in file headers and footers, or statistical analysis and machine learning algorithms on data from the middle of the file. This paper introduces a new approach to classify file fragment based on grayscale image. The proposed method treats a file fragment as a grayscale image, and uses image classification method to classify file fragment. Furthermore, two models based on file-unbiased and type-unbiased are proposed to verify the validity of the proposed method. Compared with previous works, the experimental results are promising. An average classification accuracy of 39.7% in file-unbiased model and 54.7% in type-unbiased model are achieved on 29 file types.",
"title": ""
},
{
"docid": "ddd09bc1c5b16e273bb9d1eaeae1a7e8",
"text": "In this paper, we study concurrent beamforming issue for achieving high capacity in indoor millimeter-wave (mmWave) networks. The general concurrent beamforming issue is first formulated as an optimization problem to maximize the sum rates of concurrent transmissions, considering the mutual interference. To reduce the complexity of beamforming and the total setup time, concurrent beamforming is decomposed into multiple single-link beamforming, and an iterative searching algorithm is proposed to quickly achieve the suboptimal transmission/reception beam sets. A codebook-based beamforming protocol at medium access control (MAC) layer is then introduced in a distributive manner to determine the beam sets. Both analytical and simulation results demonstrate that the proposed protocol can drastically reduce total setup time, increase system throughput, and improve energy efficiency.",
"title": ""
},
{
"docid": "2dda75184e2c9c5507c75f84443fff08",
"text": "Text classification can help users to effectively handle and exploit useful information hidden in large-scale documents. However, the sparsity of data and the semantic sensitivity to context often hinder the classification performance of short texts. In order to overcome the weakness, we propose a unified framework to expand short texts based on word embedding clustering and convolutional neural network (CNN). Empirically, the semantically related words are usually close to each other in embedding spaces. Thus, we first discover semantic cliques via fast clustering. Then, by using additive composition over word embeddings from context with variable window width, the representations of multi-scale semantic units1 in short texts are computed. In embedding spaces, the restricted nearest word embeddings (NWEs)2 of the semantic units are chosen to constitute expanded matrices, where the semantic cliques are used as supervision information. Finally, for a short text, the projected matrix 3 and expanded matrices are combined and fed into CNN in parallel. Experimental results on two open benchmarks validate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "a8bd9e8470ad414c38f5616fb14d433d",
"text": "Detecting hidden communities from observed interactions is a classical problem. Theoretical analysis of community detection has so far been mostly limited to models with non-overlapping communities such as the stochastic block model. In this paper, we provide guaranteed community detection for a family of probabilistic network models with overlapping communities, termed as the mixed membership Dirichlet model, first introduced in Airoldi et al. (2008). This model allows for nodes to have fractional memberships in multiple communities and assumes that the community memberships are drawn from a Dirichlet distribution. Moreover, it contains the stochastic block model as a special case. We propose a unified approach to learning communities in these models via a tensor spectral decomposition approach. Our estimator uses low-order moment tensor of the observed network, consisting of 3-star counts. Our learning method is based on simple linear algebraic operations such as singular value decomposition and tensor power iterations. We provide guaranteed recovery of community memberships and model parameters, and present a careful finite sample analysis of our learning method. Additionally, our results match the best known scaling requirements for the special case of the (homogeneous) stochastic block model.",
"title": ""
},
{
"docid": "545a7a98c79d14ba83766aa26cff0291",
"text": "Existing extreme learning algorithm have not taken into account four issues: 1) complexity; 2) uncertainty; 3) concept drift; and 4) high dimensionality. A novel incremental type-2 meta-cognitive extreme learning machine (ELM) called evolving type-2 ELM (eT2ELM) is proposed to cope with the four issues in this paper. The eT2ELM presents three main pillars of human meta-cognition: 1) what-to-learn; 2) how-to-learn; and 3) when-to-learn. The what-to-learn component selects important training samples for model updates by virtue of the online certainty-based active learning method, which renders eT2ELM as a semi-supervised classifier. The how-to-learn element develops a synergy between extreme learning theory and the evolving concept, whereby the hidden nodes can be generated and pruned automatically from data streams with no tuning of hidden nodes. The when-to-learn constituent makes use of the standard sample reserved strategy. A generalized interval type-2 fuzzy neural network is also put forward as a cognitive component, in which a hidden node is built upon the interval type-2 multivariate Gaussian function while exploiting a subset of Chebyshev series in the output node. The efficacy of the proposed eT2ELM is numerically validated in 12 data streams containing various concept drifts. The numerical results are confirmed by thorough statistical tests, where the eT2ELM demonstrates the most encouraging numerical results in delivering reliable prediction, while sustaining low complexity.",
"title": ""
},
{
"docid": "a15c94c0ec40cb8633d7174b82b70a16",
"text": "Koenigs, Young and colleagues [1] recently tested patients with emotion-related damage in the ventromedial prefrontal cortex (VMPFC) usingmoral dilemmas used in previous neuroimaging studies [2,3]. These patients made unusually utilitarian judgments (endorsing harmful actions that promote the greater good). My collaborators and I have proposed a dual-process theory of moral judgment [2,3] that we claim predicts this result. In a Research Focus article published in this issue of Trends in Cognitive Sciences, Moll and de Oliveira-Souza [4] challenge this interpretation. Our theory aims to explain some puzzling patterns in commonsense moral thought. For example, people usually approve of diverting a runaway trolley thatmortally threatens five people onto a side-track, where it will kill only one person. And yet people usually disapprove of pushing someone in front of a runaway trolley, where this will kill the person pushed, but save five others [5]. Our theory, in a nutshell, is this: the thought of pushing someone in front of a trolley elicits a prepotent, negative emotional response (supported in part by the medial prefrontal cortex) that drives moral disapproval [2,3]. People also engage in utilitarian moral reasoning (aggregate cost–benefit analysis), which is likely subserved by the dorsolateral prefrontal cortex (DLPFC) [2,3]. When there is no prepotent emotional response, utilitarian reasoning prevails (as in the first case), but sometimes prepotent emotions and utilitarian reasoning conflict (as in the second case). This conflict is detected by the anterior cingulate cortex, which signals the need for cognitive control, to be implemented in this case by the anterior DLPFC [Brodmann’s Areas (BA) 10/46]. Overriding prepotent emotional responses requires additional cognitive control and, thus, we find increased activity in the anterior DLPFC when people make difficult utilitarian moral judgments [3]. More recent studies support this theory: if negative emotions make people disapprove of pushing the man to his death, then inducing positive emotion might lead to more utilitarian approval, and this is indeed what happens [6]. Likewise, patients with frontotemporal dementia (known for their ‘emotional blunting’) should more readily approve of pushing the man in front of the trolley, and they do [7]. This finding directly foreshadows the hypoemotional VMPFC patients’ utilitarian responses to this and other cases [1]. Finally, we’ve found that cognitive load selectively interferes with utilitarian moral judgment,",
"title": ""
},
{
"docid": "343c71c6013c5684b8860c4386b34526",
"text": "This paper seeks to analyse the extent to which organizations can learn from projects by focusing on the relationship between projects and their organizational context. The paper highlights three dimensions of project-based learning: the practice-based nature of learning, project autonomy and knowledge integration. This analysis generates a number of propositions on the relationship between the learning generated within projects and its transfer to other parts of the organization. In particular, the paper highlights the ‘learning boundaries’ which emerge when learning within projects creates new divisions in practice. These propositions are explored through a comparative analysis of two case studies of construction projects. This analysis suggests that the learning boundaries which develop around projects reflect the nested nature of learning, whereby different levels of learning may substitute for each other. Learning outcomes in the cases can thus be analysed in terms of the interplay between organizational learning and project-level learning. The paper concludes that learning boundaries are an important constraint on attempts to exploit the benefits of projectbased learning for the wider organization.",
"title": ""
},
{
"docid": "5ec8b094cbbbfbbc0632d85b32255c49",
"text": "Pyramidal neurons are characterized by their distinct apical and basal dendritic trees and the pyramidal shape of their soma. They are found in several regions of the CNS and, although the reasons for their abundance remain unclear, functional studies — especially of CA1 hippocampal and layer V neocortical pyramidal neurons — have offered insights into the functions of their unique cellular architecture. Pyramidal neurons are not all identical, but some shared functional principles can be identified. In particular, the existence of dendritic domains with distinct synaptic inputs, excitability, modulation and plasticity appears to be a common feature that allows synapses throughout the dendritic tree to contribute to action-potential generation. These properties support a variety of coincidence-detection mechanisms, which are likely to be crucial for synaptic integration and plasticity.",
"title": ""
}
] |
scidocsrr
|
fabec23d6c0c75a8124fa0c30c2ad4a2
|
The algorithm for getting a UML class diagram from Topological Functioning Model
|
[
{
"docid": "7d079d3354069474accdbd32a6929319",
"text": "Despite the advantages that object technology can provide to the software development community and its customers, the fundamental problems associated with identifying objects, their attributes, and methods remain: it is a largely manual process driven by heuristics that analysts acquire through experience. While a number of methods exist for requirements development and specification, very few tools exist to assist analysts in making the transition from textual descriptions to other notations for object-oriented analysis and other conceptual models. In this paper we describe a methodology and a prototype tool, Linguistic assistant for Domain Analysis (LIDA), which provide linguistic assistance in the model development process. We first present our methodology to conceptual modeling through linguistic analysis. We give an overview of LIDA's functionality and present its technical design and the functionality of its components. We also provide a comparison of LIDA's functionality with that of other research prototypes. Finally, we present an example of how LIDA is used in a conceptual modeling task.",
"title": ""
}
] |
[
{
"docid": "29509d1f63d155dfa63efcf8d4102283",
"text": "The purpose of this work was to determine the effects of varying levels of dietary protein on body composition and muscle protein synthesis during energy deficit (ED). A randomized controlled trial of 39 adults assigned the subjects diets providing protein at 0.8 (recommended dietary allowance; RDA), 1.6 (2×-RDA), and 2.4 (3×-RDA) g kg(-1) d(-1) for 31 d. A 10-d weight-maintenance (WM) period was followed by a 21 d, 40% ED. Body composition and postabsorptive and postprandial muscle protein synthesis were assessed during WM (d 9-10) and ED (d 30-31). Volunteers lost (P<0.05) 3.2 ± 0.2 kg body weight during ED regardless of dietary protein. The proportion of weight loss due to reductions in fat-free mass was lower (P<0.05) and the loss of fat mass was higher (P<0.05) in those receiving 2×-RDA and 3×-RDA compared to RDA. The anabolic muscle response to a protein-rich meal during ED was not different (P>0.05) from WM for 2×-RDA and 3×-RDA, but was lower during ED than WM for those consuming RDA levels of protein (energy × protein interaction, P<0.05). To assess muscle protein metabolic responses to varied protein intakes during ED, RDA served as the study control. In summary, we determined that consuming dietary protein at levels exceeding the RDA may protect fat-free mass during short-term weight loss.",
"title": ""
},
{
"docid": "4287db8deb3c4de5d7f2f5695c3e2e70",
"text": "The brain is complex and dynamic. The spatial scales of interest to the neurobiologist range from individual synapses (approximately 1 microm) to neural circuits (centimeters); the timescales range from the flickering of channels (less than a millisecond) to long-term memory (years). Remarkably, fluorescence microscopy has the potential to revolutionize research on all of these spatial and temporal scales. Two-photon excitation (2PE) laser scanning microscopy allows high-resolution and high-sensitivity fluorescence microscopy in intact neural tissue, which is hostile to traditional forms of microscopy. Over the last 10 years, applications of 2PE, including microscopy and photostimulation, have contributed to our understanding of a broad array of neurobiological phenomena, including the dynamics of single channels in individual synapses and the functional organization of cortical maps. Here we review the principles of 2PE microscopy, highlight recent applications, discuss its limitations, and point to areas for future research and development.",
"title": ""
},
{
"docid": "cd230b3fa34267564380bdd0abe55c74",
"text": "Healthcare data are a valuable source of healthcare intelligence. Sharing of healthcare data is one essential step to make healthcare system smarter and improve the quality of healthcare service. Healthcare data, one personal asset of patient, should be owned and controlled by patient, instead of being scattered in different healthcare systems, which prevents data sharing and puts patient privacy at risks. Blockchain is demonstrated in the financial field that trusted, auditable computing is possible using a decentralized network of peers accompanied by a public ledger. In this paper, we proposed an App (called Healthcare Data Gateway (HGD)) architecture based on blockchain to enable patient to own, control and share their own data easily and securely without violating privacy, which provides a new potential way to improve the intelligence of healthcare systems while keeping patient data private. Our proposed purpose-centric access model ensures patient own and control their healthcare data; simple unified Indicator-Centric Schema (ICS) makes it possible to organize all kinds of personal healthcare data practically and easily. We also point out that MPC (Secure Multi-Party Computing) is one promising solution to enable untrusted third-party to conduct computation over patient data without violating privacy.",
"title": ""
},
{
"docid": "d21213e0dbef657d5e7ec8689fe427ed",
"text": "Cutaneous infections due to Listeria monocytogenes are rare. Typically, infections manifest as nonpainful, nonpruritic, self-limited, localized, papulopustular or vesiculopustular eruptions in healthy persons. Most cases follow direct inoculation of the skin in veterinarians or farmers who have exposure to animal products of conception. Less commonly, skin lesions may arise from hematogenous dissemination in compromised hosts with invasive disease. Here, we report the first case in a gardener that occurred following exposure to soil and vegetation.",
"title": ""
},
{
"docid": "c27aee0b72f3e8239915a8d33c060e96",
"text": "Advances in artificial impedance surface conformal antennas are presented. A detailed conical impedance modulation is proposed for the first time. By coating an artificial impedance surface on a cone, we can control the conical surface wave radiating at the desired direction. The surface impedance is constructed by printing a dense texture of sub wavelength metal patches on a grounded dielectric slab. The effective surface impedance depends on the size of the patches, and can be varied as a function of position. The final devices are conical conformal antennas with simple layout and feeding. Simulated results are presented, and better aperture efficiency and lower side lobe level are obtained than our predecessors [2].",
"title": ""
},
{
"docid": "8427181b5e0596ec6ed954722808a78b",
"text": "Yong Khoo, Sang Chung This paper presents an automated method for 3D character skeleton extraction that can be applied for generic 3D shapes. Our work is motivated by the skeleton-based prior work on automatic rigging focused on skeleton extraction and can automatically aligns the extracted structure to fit the 3D shape of the given 3D mesh. The body mesh can be subsequently skinned based on the extracted skeleton and thus enables rigging process. In the experiment, we apply public dataset to drive the estimated skeleton from different body shapes, as well as the real data obtained from 3D scanning systems. Satisfactory results are obtained compared to the existing approaches.",
"title": ""
},
{
"docid": "f11a88cad05210e26940e79700b0ca11",
"text": "Agile software development methods provide great flexibility to adapt to changing requirements and rapidly market products. Sri Lankan software organizations too are embracing these methods to develop software products. Being an iterative an incremental software engineering methodology, agile philosophy promotes working software over comprehensive documentation and heavily relies on continuous customer collaboration throughout the life cycle of the product. Hence characteristics of the people involved with the project and their working environment plays an important role in the success of an agile project compared to any other software engineering methodology. This study investigated the factors that lead to the success of a project that adopts agile methodology in Sri Lanka. An online questionnaire was used to collect data to identify people and organizational factors that lead to project success. The sample consisted of Sri Lankan software professionals with several years of industry experience in developing projects using agile methods. According to the statistical data analysis, customer satisfaction, customer commitment, team size, corporate culture, technical competency, decision time, customer commitment and training and learning have a influence on the success of the project.",
"title": ""
},
{
"docid": "28415a26b69057231f1cd063e3dbed40",
"text": "OBJECTIVE\nTo determine if ovariectomy (OVE) is a safe alternative to ovariohysterectomy (OVH) for canine gonadectomy.\n\n\nSTUDY DESIGN\nLiterature review.\n\n\nMETHODS\nAn on-line bibliographic search in MEDLINE and PubMed was performed in December 2004, covering the period 1969-2004. Relevant studies were compared and evaluated with regard to study design, surgical technique, and both short-term and long-term follow-up.\n\n\nCONCLUSIONS\nOVH is technically more complicated, time consuming, and is probably associated with greater morbidity (larger incision, more intraoperative trauma, increased discomfort) compared with OVE. No significant differences between techniques were observed for incidence of long-term urogenital problems, including endometritis/pyometra and urinary incontinence, making OVE the preferred method of gonadectomy in the healthy bitch.\n\n\nCLINICAL RELEVANCE\nCanine OVE can replace OVH as the procedure of choice for routine neutering of healthy female dogs.",
"title": ""
},
{
"docid": "91c5e8e0b5bdcfa66d2f302128eacef1",
"text": "PURPOSE OF REVIEW\nThis article evaluates the current status of the gut barrier in gastrointestinal disorders.\n\n\nRECENT FINDINGS\nThe gut barrier is a complex, multicomponent, interactive, and bidirectional entity that includes, but is not restricted to, the epithelial cell layer. Intestinal permeability, the phenomenon most readily and commonly studied, reflects just one (albeit an important one) function of the barrier that is intimately related to and interacts with luminal contents, including the microbiota. The mucosal immune response also influences barrier integrity; effects of inflammation per se must be accounted for in the interpretation of permeability studies in disease states.\n\n\nSUMMARY\nAlthough several aspects of barrier function can be assessed in man, one must be aware of exactly what a given test measures, as well as of its limitations. The temptation to employ results from a test of paracellular flux to imply a role for barrier dysfunction in disorders thought to be based on bacterial or macromolecular translocation must be resisted. Although changes in barrier function have been described in several gastrointestinal disorders, their primacy remains to be defined. At present, few studies support efficacy for an intervention that improves barrier function in altering the natural history of a disease process.",
"title": ""
},
{
"docid": "81a1504505fa4630af771ccf6ed8404d",
"text": "A method for the simultaneous co-registration and georeferencing of multiple 3D pointclouds and associated intensity information is proposed. It is a generalization of the 3D surface matching problem. The simultaneous co-registration provides for a strict solution to the problem, as opposed to sequential pairwise registration. The problem is formulated as the Least Squares matching of overlapping 3D surfaces. The parameters of 3D transformations of multiple surfaces are simultaneously estimated, using the Generalized GaussMarkoff model, minimizing the sum of squares of the Euclidean distances among the surfaces. An observation equation is written for each surface-to-surface correspondence. Each overlapping surface pair contributes a group of observation equations to the design matrix. The parameters are introduced into the system as stochastic variables, as a second type of (fictitious) observations. This extension allows to control the estimated parameters. Intensity information is introduced into the system in the form of quasisurfaces as the third type of observations. Reference points, defining an external (object) coordinate system, which are imaged in additional intensity images, or can be located in the pointcloud, serve as the fourth type of observations. They transform the whole block of “models” to a unique reference system. Furthermore, the given coordinate values of the control points are treated as observations. This gives the fifth type of observations. The total system is solved by applying the Least Squares technique, provided that sufficiently good initial values for the transformation parameters are given. This method can be applied to data sets generated from aerial as well as terrestrial laser scanning or other pointcloud generating methods. * Corresponding author. www.photogrammetry.ethz.ch",
"title": ""
},
{
"docid": "ef08ef786fd759b33a7d323c69be19db",
"text": "Language modeling approaches to information retrieval are attractive and promising because they connect the problem of retrieval with that of language model estimation, which has been studied extensively in other application areas such as speech recognition. The basic idea of these approaches is to estimate a language model for each document, and then rank documents by the likelihood of the query according to the estimated language model. A core problem in language model estimation is smoothing, which adjusts the maximum likelihood estimator so as to correct the inaccuracy due to data sparseness. In this paper, we study the problem of language model smoothing and its influence on retrieval performance. We examine the sensitivity of retrieval performance to the smoothing parameters and compare several popular smoothing methods on different test collection.",
"title": ""
},
{
"docid": "84440568cdae970ba532df5501ff7781",
"text": "Present work deals with the biotechnological production of fuel ethanol from different raw materials. The different technologies for producing fuel ethanol from sucrose-containing feedstocks (mainly sugar cane), starchy materials and lignocellulosic biomass are described along with the major research trends for improving them. The complexity of the biomass processing is recognized through the analysis of the different stages involved in the conversion of lignocellulosic complex into fermentable sugars. The features of fermentation processes for the three groups of studied feedstocks are discussed. Comparative indexes for the three major types of feedstocks for fuel ethanol production are presented. Finally, some concluding considerations on current research and future tendencies in the production of fuel ethanol regarding the pretreatment and biological conversion of the feedstocks are presented.",
"title": ""
},
{
"docid": "27afd0280e81b731eb434ef174ffd9b2",
"text": "This paper presents a review of recently used direct torque and flux control (DTC) techniques for voltage inverter-fed induction and permanent-magnet synchronous motors. A variety of techniques, different in concept, are described as follows: switching-table-based hysteresis DTC, direct self control, constant-switching-frequency DTC with space-vector modulation (DTC-SVM). Also, trends in the DTC-SVM techniques based on neuro-fuzzy logic controllers are presented. Some oscillograms that illustrate properties of the presented techniques are shown.",
"title": ""
},
{
"docid": "9beff0659cc5aad37097d212caaeef40",
"text": "Mobile cloud computing (MC2) is emerging as a promising computing paradigm which helps alleviate the conflict between resource-constrained mobile devices and resource-consuming mobile applications through computation offloading. In this paper, we analyze the computation offloading problem in cloudlet-based mobile cloud computing. Different from most of the previous works which are either from the perspective of a single user or under the setting of a single wireless access point (AP), we research the computation offloading strategy of multiple users via multiple wireless APs. With the widespread deployment of WLAN, offloading via multiple wireless APs will obtain extensive application. Taking energy consumption and delay (including computing and transmission delay) into account, we present a game-theoretic analysis of the computation offloading problem while mimicking the selfish nature of the individuals. In the case of homogeneous mobile users, conditions of Nash equilibrium are analyzed, and an algorithm that admits a Nash equilibrium is proposed. For heterogeneous users, we prove the existence of Nash equilibrium by introducing the definition of exact potential game and design a distributed computation offloading algorithm to help mobile users choose proper offloading strategies. Numerical extensive simulations have been conducted and results demonstrate that the proposed algorithm can achieve desired system performance.",
"title": ""
},
{
"docid": "a9b02760d241119384720e34dc1045ef",
"text": "The complex impacts of disease stages and disease symptoms on spectral characteristics of the plants lead to limitation in disease severity detection using the spectral vegetation indices (SVIs). Although machine learning techniques have been utilized for vegetation parameters estimation and disease detection, the effects of disease symptoms on their performances have been less considered. Hence, this paper investigated on 1) using partial least square regression (PLSR), v support vector regression (v-SVR), and Gaussian process regression (GPR) methods for wheat leaf rust disease detection, 2) evaluating the impact of training sample size on the results, 3) the influence of disease symptoms effects on the predictions performances of the above-mentioned methods, and 4) comparisons between the performances of SVIs and machine learning techniques. In this study, the spectra of the infected and non infected leaves in different disease symptoms were measured using a non imaging spectroradiometer in the electromagnetic region of 350 to 2500 nm. In order to produce a ground truth dataset, we employed photos of a digital camera to compute the disease severity and disease symptoms fractions. Then, different sample sizes of collected datasets were utilized to train each method. PLSR showed coefficient of determination (R2) values of 0.98 (root mean square error (RMSE) = 0.6) and 0.92 (RMSE = 0.11) at leaf and canopy, respectively. SVR showed R2 and RMSE close to PLSR at leaf (R2 = 0.98, RMSE = 0.05) and canopy (R2 = 0.95, RMSE = 0.12) scales. GPR showed R2 values of 0.98 (RMSE = 0.03) and 0.97 (RMSE = 0.11) at leaf and canopy scale, respectively. Moreover, GPR represents better performances than others using small training sample size. The results represent that the machine learning techniques in contrast to SVIs are not sensitive to different disease symptoms and their results are reliable.",
"title": ""
},
{
"docid": "1a7dd0fb317a9640ee6e90036d6036fa",
"text": "A genome-wide association study was performed to identify genetic factors involved in susceptibility to psoriasis (PS) and psoriatic arthritis (PSA), inflammatory diseases of the skin and joints in humans. 223 PS cases (including 91 with PSA) were genotyped with 311,398 single nucleotide polymorphisms (SNPs), and results were compared with those from 519 Northern European controls. Replications were performed with an independent cohort of 577 PS cases and 737 controls from the U.S., and 576 PSA patients and 480 controls from the U.K.. Strongest associations were with the class I region of the major histocompatibility complex (MHC). The most highly associated SNP was rs10484554, which lies 34.7 kb upstream from HLA-C (P = 7.8x10(-11), GWA scan; P = 1.8x10(-30), replication; P = 1.8x10(-39), combined; U.K. PSA: P = 6.9x10(-11)). However, rs2395029 encoding the G2V polymorphism within the class I gene HCP5 (combined P = 2.13x10(-26) in U.S. cases) yielded the highest ORs with both PS and PSA (4.1 and 3.2 respectively). This variant is associated with low viral set point following HIV infection and its effect is independent of rs10484554. We replicated the previously reported association with interleukin 23 receptor and interleukin 12B (IL12B) polymorphisms in PS and PSA cohorts (IL23R: rs11209026, U.S. PS, P = 1.4x10(-4); U.K. PSA: P = 8.0x10(-4); IL12B:rs6887695, U.S. PS, P = 5x10(-5) and U.K. PSA, P = 1.3x10(-3)) and detected an independent association in the IL23R region with a SNP 4 kb upstream from IL12RB2 (P = 0.001). Novel associations replicated in the U.S. PS cohort included the region harboring lipoma HMGIC fusion partner (LHFP) and conserved oligomeric golgi complex component 6 (COG6) genes on chromosome 13q13 (combined P = 2x10(-6) for rs7993214; OR = 0.71), the late cornified envelope gene cluster (LCE) from the Epidermal Differentiation Complex (PSORS4) (combined P = 6.2x10(-5) for rs6701216; OR 1.45) and a region of LD at 15q21 (combined P = 2.9x10(-5) for rs3803369; OR = 1.43). This region is of interest because it harbors ubiquitin-specific protease-8 whose processed pseudogene lies upstream from HLA-C. This region of 15q21 also harbors the gene for SPPL2A (signal peptide peptidase like 2a) which activates tumor necrosis factor alpha by cleavage, triggering the expression of IL12 in human dendritic cells. We also identified a novel PSA (and potentially PS) locus on chromosome 4q27. This region harbors the interleukin 2 (IL2) and interleukin 21 (IL21) genes and was recently shown to be associated with four autoimmune diseases (Celiac disease, Type 1 diabetes, Grave's disease and Rheumatoid Arthritis).",
"title": ""
},
{
"docid": "ec19face14810817bfd824d70a11c746",
"text": "The article deals with various ways of memristor modeling and simulation in the MATLAB&Simulink environment. Recently used and published mathematical memristor model serves as a base, regarding all known features of its behavior. Three different approaches in the MATLAB&Simulink system are used for the differential and other equations formulation. The first one employs the standard system core offer for the Ordinary Differential Equations solutions (ODE) in the form of an m-file. The second approach is the model construction in Simulink environment. The third approach employs so-called physical modeling using the built-in Simscape system. The output data are the basic memristor characteristics and appropriate time courses. The features of all models are discussed, especially regarding the computer simulation. Possible problems that may occur during modeling are pointed. Key-Words: memristor, modeling and simulation, MATLAB, Simulink, Simscape, physical model",
"title": ""
},
{
"docid": "99b57c1396eae76aa4fe01d466193f1f",
"text": "This paper presents the analysis and discussion of the off-site localization competition track, which took place during the Seventh International Conference on Indoor Positioning and Indoor Navigation (IPIN 2016). Five international teams proposed different strategies for smartphone-based indoor positioning using the same reference data. The competitors were provided with several smartphone-collected signal datasets, some of which were used for training (known trajectories), and others for evaluating (unknown trajectories). The competition permits a coherent evaluation method of the competitors' estimations, where inside information to fine-tune their systems is not offered, and thus provides, in our opinion, a good starting point to introduce a fair comparison between the smartphone-based systems found in the literature. The methodology, experience, feedback from competitors and future working lines are described.",
"title": ""
},
{
"docid": "3e97e8be1ab2f2a056fdccbcd350f522",
"text": "Backchannel responses like “uh-huh”, “yeah”, “right” are used by the listener in a social dialog as a way to provide feedback to the speaker. In the context of human-computer interaction, these responses can be used by an artificial agent to build rapport in conversations with users. In the past, multiple approaches have been proposed to detect backchannel cues and to predict the most natural timing to place those backchannel utterances. Most of these are based on manually optimized fixed rules, which may fail to generalize. Many systems rely on the location and duration of pauses and pitch slopes of specific lengths. In the past, we proposed an approach by training artificial neural networks on acoustic features such as pitch and power and also attempted to add word embeddings via word2vec. In this work, we refined this approach by evaluating different methods to add timed word embeddings via word2vec. Comparing the performance using various feature combinations, we could show that adding linguistic features improves the performance over a prediction system that only uses acoustic features.",
"title": ""
},
{
"docid": "c1045a302fc2340a0334f7ee58349aa6",
"text": "Relatively brief interventions have consistently been found to be effective in reducing alcohol consumption or achieving treatment referral of problem drinkers. To date, the literature includes at least a dozen randomized trials of brief referral or retention procedures, and 32 controlled studies of brief interventions targeting drinking behavior, enrolling over 6000 problem drinkers in both health care and treatment settings across 14 nations. These studies indicate that brief interventions are more effective than no counseling, and often as effective as more extensive treatment. The outcome literature is reviewed, and common motivational elements of effective brief interventions are described. There is encouraging evidence that the course of harmful alcohol use can be effectively altered by well-designed intervention strategies which are feasible within relatively brief-contact contexts such as primary health care settings and employee assistance programs. Implications for future research and practice are considered.",
"title": ""
}
] |
scidocsrr
|
3d64739572b4db24f15ed648fc62cdd5
|
An Empirical Evaluation of Similarity Measures for Time Series Classification
|
[
{
"docid": "ceca5552bcb7a5ebd0b779737bc68275",
"text": "In a way similar to the string-to-string correction problem, we address discrete time series similarity in light of a time-series-to-time-series-correction problem for which the similarity between two time series is measured as the minimum cost sequence of edit operations needed to transform one time series into another. To define the edit operations, we use the paradigm of a graphical editing process and end up with a dynamic programming algorithm that we call time warp edit distance (TWED). TWED is slightly different in form from dynamic time warping (DTW), longest common subsequence (LCSS), or edit distance with real penalty (ERP) algorithms. In particular, it highlights a parameter that controls a kind of stiffness of the elastic measure along the time axis. We show that the similarity provided by TWED is a potentially useful metric in time series retrieval applications since it could benefit from the triangular inequality property to speed up the retrieval process while tuning the parameters of the elastic measure. In that context, a lower bound is derived to link the matching of time series into down sampled representation spaces to the matching into the original space. The empiric quality of the TWED distance is evaluated on a simple classification task. Compared to edit distance, DTW, LCSS, and ERP, TWED has proved to be quite effective on the considered experimental task.",
"title": ""
},
{
"docid": "510a43227819728a77ff0c7fa06fa2d0",
"text": "The ubiquity of time series data across almost all human endeavors has produced a great interest in time series data mining in the last decade. While there is a plethora of classification algorithms that can be applied to time series, all of the current empirical evidence suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping. In this work we make a surprising claim. There is an invariance that the community has missed, complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where complex objects are incorrectly assigned to a simpler class. We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification accuracy. We further show that this improvement does not compromise efficiency, since we can lower bound the measure and use a modification of triangular inequality, thus making use of most existing indexing and data mining algorithms. We evaluate our ideas with the largest and most comprehensive set of time series classification experiments ever attempted, and show that complexity-invariant distance measures can produce improvements in accuracy in the vast majority of cases.",
"title": ""
}
] |
[
{
"docid": "1349c5daedd71bdfccaa0ea48b3fd54a",
"text": "OBJECTIVE\nCraniosacral therapy (CST) is an alternative treatment approach, aiming to release restrictions around the spinal cord and brain and subsequently restore body function. A previously conducted systematic review did not obtain valid scientific evidence that CST was beneficial to patients. The aim of this review was to identify and critically evaluate the available literature regarding CST and to determine the clinical benefit of CST in the treatment of patients with a variety of clinical conditions.\n\n\nMETHODS\nComputerised literature searches were performed in Embase/Medline, Medline(®) In-Process, The Cochrane library, CINAHL, and AMED from database start to April 2011. Studies were identified according to pre-defined eligibility criteria. This included studies describing observational or randomised controlled trials (RCTs) in which CST as the only treatment method was used, and studies published in the English language. The methodological quality of the trials was assessed using the Downs and Black checklist.\n\n\nRESULTS\nOnly seven studies met the inclusion criteria, of which three studies were RCTs and four were of observational study design. Positive clinical outcomes were reported for pain reduction and improvement in general well-being of patients. Methodological Downs and Black quality scores ranged from 2 to 22 points out of a theoretical maximum of 27 points, with RCTs showing the highest overall scores.\n\n\nCONCLUSION\nThis review revealed the paucity of CST research in patients with different clinical pathologies. CST assessment is feasible in RCTs and has the potential of providing valuable outcomes to further support clinical decision making. However, due to the current moderate methodological quality of the included studies, further research is needed.",
"title": ""
},
{
"docid": "1de19775f0c32179f59674c7f0d8b540",
"text": "As the most commonly used bots in first-person shooter (FPS) online games, aimbots are notoriously difficult to detect because they are completely passive and resemble excellent honest players in many aspects. In this paper, we conduct the first field measurement study to understand the status quo of aimbots and how they play in the wild. For data collection purpose, we devise a novel and generic technique called baittarget to accurately capture existing aimbots from the two most popular FPS games. Our measurement reveals that cheaters who use aimbots cannot play as skillful as excellent honest players in all aspects even though aimbots can help them to achieve very high shooting performance. To characterize the unskillful and blatant nature of cheaters, we identify seven features, of which six are novel, and these features cannot be easily mimicked by aimbots. Leveraging this set of features, we propose an accurate and robust server-side aimbot detector called AimDetect. The core of AimDetect is a cascaded classifier that detects the inconsistency between performance and skillfulness of aimbots. We evaluate the efficacy and generality of AimDetect using the real game traces. Our results show that AimDetect can capture almost all of the aimbots with very few false positives and minor overhead.",
"title": ""
},
{
"docid": "961cc1dc7063706f8f66fc136da41661",
"text": "From a theoretical perspective, most discussions of statistical learning (SL) have focused on the possible \"statistical\" properties that are the object of learning. Much less attention has been given to defining what \"learning\" is in the context of \"statistical learning.\" One major difficulty is that SL research has been monitoring participants' performance in laboratory settings with a strikingly narrow set of tasks, where learning is typically assessed offline, through a set of two-alternative-forced-choice questions, which follow a brief visual or auditory familiarization stream. Is that all there is to characterizing SL abilities? Here we adopt a novel perspective for investigating the processing of regularities in the visual modality. By tracking online performance in a self-paced SL paradigm, we focus on the trajectory of learning. In a set of three experiments we show that this paradigm provides a reliable and valid signature of SL performance, and it offers important insights for understanding how statistical regularities are perceived and assimilated in the visual modality. This demonstrates the promise of integrating different operational measures to our theory of SL.",
"title": ""
},
{
"docid": "3d56b369e10b29969132c44897d4cc4c",
"text": "Real-world object classes appear in imbalanced ratios. This poses a significant challenge for classifiers which get biased towards frequent classes. We hypothesize that improving the generalization capability of a classifier should improve learning on imbalanced datasets. Here, we introduce the first hybrid loss function that jointly performs classification and clustering in a single formulation. Our approach is based on an ‘affinity measure’ in Euclidean space that leads to the following benefits: (1) direct enforcement of maximum margin constraints on classification boundaries, (2) a tractable way to ensure uniformly spaced and equidistant cluster centers, (3) flexibility to learn multiple class prototypes to support diversity and discriminability in feature space. Our extensive experiments demonstrate the significant performance improvements on visual classification and verification tasks on multiple imbalanced datasets. The proposed loss can easily be plugged in any deep architecture as a differentiable block and demonstrates robustness against different levels of data imbalance and corrupted labels.",
"title": ""
},
{
"docid": "1ebf198459b98048404b706e4852eae2",
"text": "Network forensics is a branch of digital forensics, which applies to network security. It is used to relate monitoring and analysis of the computer network traffic, that helps us in collecting information and digital evidence, for the protection of network that can use as firewall and IDS. Firewalls and IDS can't always prevent and find out the unauthorized access within a network. This paper presents an extensive survey of several forensic frameworks. There is a demand of a system which not only detects the complex attack, but also it should be able to understand what had happened. Here it talks about the concept of the distributed network forensics. The concept of the Distributed network forensics is based on the distributed techniques, which are useful for providing an integrated platform for the automatic forensic evidence gathering and important data storage, valuable support and an attack attribution graph generation mechanism to depict hacking events.",
"title": ""
},
{
"docid": "fd0e31b2675a797c26af731ef1ff22df",
"text": "State representations critically affect the effectiveness of learning in robots. In this paper, we propose a roboticsspecific approach to learning such state representations. Robots accomplish tasks by interacting with the physical world. Physics in turn imposes structure on both the changes in the world and on the way robots can effect these changes. Using prior knowledge about interacting with the physical world, robots can learn state representations that are consistent with physics. We identify five robotic priors and explain how they can be used for representation learning. We demonstrate the effectiveness of this approach in a simulated slot car racing task and a simulated navigation task with distracting moving objects. We show that our method extracts task-relevant state representations from highdimensional observations, even in the presence of task-irrelevant distractions. We also show that the state representations learned by our method greatly improve generalization in reinforcement learning.",
"title": ""
},
{
"docid": "98b4e2d51efde6f4f8c43c29650b8d2f",
"text": "New robotics is an approach to robotics that, in contrast to traditional robotics, employs ideas and principles from biology. While in the traditional approach there are generally accepted methods (e.g., from control theory), designing agents in the new robotics approach is still largely considered an art. In recent years, we have been developing a set of heuristics, or design principles, that on the one hand capture theoretical insights about intelligent (adaptive) behavior, and on the other provide guidance in actually designing and building systems. In this article we provide an overview of all the principles but focus on the principles of ecological balance, which concerns the relation between environment, morphology, materials, and control, and sensory-motor coordination, which concerns self-generated sensory stimulation as the agent interacts with the environment and which is a key to the development of high-level intelligence. As we argue, artificial evolution together with morphogenesis is not only nice to have but is in fact a necessary tool for designing embodied agents.",
"title": ""
},
{
"docid": "209203c297898a2251cfd62bdfc37296",
"text": "Evolutionary computation uses computational models of evolutionary processes as key elements in the design and implementation of computerbased problem solving systems. In this paper we provide an overview of evolutionary computation, and describe several evolutionary algorithms that are currently of interest. Important similarities and differences are noted, which lead to a discussion of important issues that need to be resolved, and items for future research.",
"title": ""
},
{
"docid": "7735668d4f8407d9514211d9f5492ce6",
"text": "This revision to the EEG Guidelines is an update incorporating current EEG technology and practice. The role of the EEG in making the determination of brain death is discussed as are suggested technical criteria for making the diagnosis of electrocerebral inactivity.",
"title": ""
},
{
"docid": "e83227e0485cf7f3ba19ce20931bbc2f",
"text": "There has been an increased global demand for dermal filler injections in recent years. Although hyaluronic acid-based dermal fillers generally have a good safety profile, serious vascular complications have been reported. Here we present a typical case of skin necrosis following a nonsurgical rhinoplasty using hyaluronic acid filler. Despite various rescuing managements, unsightly superficial scars were left. It is critical for plastic surgeons and dermatologists to be familiar with the vascular anatomy and the staging of vascular complications. Any patients suspected to experience a vascular complication should receive early management under close monitoring. Meanwhile, the potentially devastating outcome caused by illegal practice calls for stricter regulations and law enforcement.",
"title": ""
},
{
"docid": "d559ace14dcc42f96d0a96b959a92643",
"text": "Graphs are an integral data structure for many parts of computation. They are highly effective at modeling many varied and flexible domains, and are excellent for representing the way humans themselves conceive of the world. Nowadays, there is lots of interest in working with large graphs, including social network graphs, “knowledge” graphs, and large bipartite graphs (for example, the Netflix movie matching graph).",
"title": ""
},
{
"docid": "f8093849e9157475149d00782c60ae60",
"text": "Social media use, potential and challenges in innovation have received little attention in literature, especially from the standpoint of the business-to-business sector. Therefore, this paper focuses on bridging this gap with a survey of social media use, potential and challenges, combined with a social media - focused innovation literature review of state-of-the-art. The study also studies the essential differences between business-to-consumer and business-to-business in the above respects. The paper starts by defining of social media and web 2.0, and then characterizes social media in business, social media in business-to-business sector and social media in business-to-business innovation. Finally we present and analyze the results of our empirical survey of 122 Finnish companies. This paper suggests that there is a significant gap between perceived potential of social media and social media use in innovation activity in business-to-business companies, recognizes potentially effective ways to reduce the gap, and clarifies the found differences between B2B's and B2C's.",
"title": ""
},
{
"docid": "9766e0507346e46e24790a4873979aa4",
"text": "Extreme learning machine (ELM) is proposed for solving a single-layer feed-forward network (SLFN) with fast learning speed and has been confirmed to be effective and efficient for pattern classification and regression in different fields. ELM originally focuses on the supervised, semi-supervised, and unsupervised learning problems, but just in the single domain. To our best knowledge, ELM with cross-domain learning capability in subspace learning has not been exploited very well. Inspired by a cognitive-based extreme learning machine technique (Cognit Comput. 6:376–390, 1; Cognit Comput. 7:263–278, 2.), this paper proposes a unified subspace transfer framework called cross-domain extreme learning machine (CdELM), which aims at learning a common (shared) subspace across domains. Three merits of the proposed CdELM are included: (1) A cross-domain subspace shared by source and target domains is achieved based on domain adaptation; (2) ELM is well exploited in the cross-domain shared subspace learning framework, and a new perspective is brought for ELM theory in heterogeneous data analysis; (3) the proposed method is a subspace learning framework and can be combined with different classifiers in recognition phase, such as ELM, SVM, nearest neighbor, etc. Experiments on our electronic nose olfaction datasets demonstrate that the proposed CdELM method significantly outperforms other compared methods.",
"title": ""
},
{
"docid": "9faf67646394dfedfef1b6e9152d9cf6",
"text": "Acoustic shooter localization systems are being rapidly deployed in the field. However, these are standalone systems---either wearable or vehicle-mounted---that do not have networking capability even though the advantages of widely distributed sensing for locating shooters have been demonstrated before. The reason for this is that certain disadvantages of wireless network-based prototypes made them impractical for the military. The system that utilized stationary single-channel sensors required many sensor nodes, while the multi-channel wearable version needed to track the absolute self-orientation of the nodes continuously, a notoriously hard task. This paper presents an approach that overcomes the shortcomings of past approaches. Specifically, the technique requires as few as five single-channel wireless sensors to provide accurate shooter localization and projectile trajectory estimation. Caliber estimation and weapon classification are also supported. In addition, a single node alone can provide reliable miss distance and range estimates based on a single shot as long as a reasonable assumption holds. The main contribution of the work and the focus of this paper is the novel sensor fusion technique that works well with a limited number of observations. The technique is thoroughly evaluated using an extensive shot library.",
"title": ""
},
{
"docid": "1b0cb70fb25d86443a01a313371a27ae",
"text": "We present a protocol for general state machine replication – a method that provides strong consistency – that has high performance in a wide-area network. In particular, our protocol Mencius has high throughput under high client load and low latency under low client load even under changing wide-area network environment and client load. We develop our protocol as a derivation from the well-known protocol Paxos. Such a development can be changed or further refined to take advantage of specific network or application requirements.",
"title": ""
},
{
"docid": "b36549a4b16c2c8ab50f1adda99f3120",
"text": "Spatial representations of time are a ubiquitous feature of human cognition. Nevertheless, interesting sociolinguistic variations exist with respect to where in space people locate temporal constructs. For instance, while in English time metaphorically flows horizontally, in Mandarin an additional vertical dimension is employed. Noting that the bilingual mind can flexibly accommodate multiple representations, the present work explored whether Mandarin-English bilinguals possess two mental time lines. Across two experiments, we demonstrated that Mandarin-English bilinguals do indeed employ both horizontal and vertical representations of time. Importantly, subtle variations to cultural context were seen to shape how these time lines were deployed.",
"title": ""
},
{
"docid": "41611606af8671f870fb90e50c2e99fc",
"text": "Pointwise label and pairwise label are both widely used in computer vision tasks. For example, supervised image classification and annotation approaches use pointwise label, while attribute-based image relative learning often adopts pairwise labels. These two types of labels are often considered independently and most existing efforts utilize them separately. However, pointwise labels in image classification and tag annotation are inherently related to the pairwise labels. For example, an image labeled with \"coast\" and annotated with \"beach, sea, sand, sky\" is more likely to have a higher ranking score in terms of the attribute \"open\", while \"men shoes\" ranked highly on the attribute \"formal\" are likely to be annotated with \"leather, lace up\" than \"buckle, fabric\". The existence of potential relations between pointwise labels and pairwise labels motivates us to fuse them together for jointly addressing related vision tasks. In particular, we provide a principled way to capture the relations between class labels, tags and attributes, and propose a novel framework PPP(Pointwise and Pairwise image label Prediction), which is based on overlapped group structure extracted from the pointwise-pairwise-label bipartite graph. With experiments on benchmark datasets, we demonstrate that the proposed framework achieves superior performance on three vision tasks compared to the state-of-the-art methods.",
"title": ""
},
{
"docid": "dc93d2204ff27c7d55a71e75d2ae4ca9",
"text": "Locating and securing an Alzheimer's patient who is outdoors and in wandering state is crucial to patient's safety. Although advances in geotracking and mobile technology have made locating patients instantly possible, reaching them while in wandering state may take time. However, a social network of caregivers may help shorten the time that it takes to reach and secure a wandering AD patient. This study proposes a new type of intervention based on novel mobile application architecture to form and direct a social support network of caregivers for locating and securing wandering patients as soon as possible. System employs, aside from the conventional tracking mechanism, a wandering detection mechanism, both of which operates through a tracking device installed a Subscriber Identity Module for Global System for Mobile Communications Network(GSM). System components are being implemented using Java. Family caregivers will be interviewed prior to and after the use of the system and Center For Epidemiologic Studies Depression Scale, Patient Health Questionnaire and Zarit Burden Interview will be applied to them during these interviews to find out the impact of the system in terms of depression, anxiety and burden, respectively.",
"title": ""
},
{
"docid": "acb3689c9ece9502897cebb374811f54",
"text": "In 2003, the QUADAS tool for systematic reviews of diagnostic accuracy studies was developed. Experience, anecdotal reports, and feedback suggested areas for improvement; therefore, QUADAS-2 was developed. This tool comprises 4 domains: patient selection, index test, reference standard, and flow and timing. Each domain is assessed in terms of risk of bias, and the first 3 domains are also assessed in terms of concerns regarding applicability. Signalling questions are included to help judge risk of bias. The QUADAS-2 tool is applied in 4 phases: summarize the review question, tailor the tool and produce review-specific guidance, construct a flow diagram for the primary study, and judge bias and applicability. This tool will allow for more transparent rating of bias and applicability of primary diagnostic accuracy studies.",
"title": ""
}
] |
scidocsrr
|
86f45dfa1f6b662bcd03b42bf94e864e
|
A Fully Unsupervised Word Sense Disambiguation Method Using Dependency Knowledge
|
[
{
"docid": "800337ef10a4245db4e45a1a5931e578",
"text": "This paper describes a method for generating sense-tagged data using Wikipedia as a source of sense annotations. Through word sense disambiguation experiments, we show that the Wikipedia-based sense annotations are reliable and can be used to construct accurate sense classifiers.",
"title": ""
}
] |
[
{
"docid": "543348825e8157926761b2f6a7981de2",
"text": "With the aim of developing a fast yet accurate algorithm for compressive sensing (CS) reconstruction of natural images, we combine in this paper the merits of two existing categories of CS methods: the structure insights of traditional optimization-based methods and the speed of recent network-based ones. Specifically, we propose a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general $$ norm CS reconstruction model. To cast ISTA into deep network form, we develop an effective strategy to solve the proximal mapping associated with the sparsity-inducing regularizer using nonlinear transforms. All the parameters in ISTA-Net (e.g. nonlinear transforms, shrinkage thresholds, step sizes, etc.) are learned end-to-end, rather than being hand-crafted. Moreover, considering that the residuals of natural images are more compressible, an enhanced version of ISTA-Net in the residual domain, dubbed ISTA-Net+, is derived to further improve CS reconstruction. Extensive CS experiments demonstrate that the proposed ISTA-Nets outperform existing state-of-the-art optimization-based and network-based CS methods by large margins, while maintaining fast computational speed. Our source codes are available: http://jianzhang.tech/projects/ISTA-Net.",
"title": ""
},
{
"docid": "c8dae180aae646bf00e202bd24f15f59",
"text": "Massively Multiplayer Online Games (MMOGs) continue to be a popular and lucrative sector of the gaming market. Project Massive was created to assess MMOG players' social experiences both inside and outside of their gaming environments and the impact of these activities on their everyday lives. The focus of Project Massive has been on the persistent player groups or \"guilds\" that form in MMOGs. The survey has been completed online by 1836 players, who reported on their play patterns, commitment to their player organizations, and personality traits like sociability, extraversion and depression. Here we report our cross-sectional findings and describe our future longitudinal work as we track players and their guilds across the evolving landscape of the MMOG product space.",
"title": ""
},
{
"docid": "d9c514f3e1089f258732eef4a949fe55",
"text": "Shading is a tedious process for artists involved in 2D cartoon and manga production given the volume of contents that the artists have to prepare regularly over tight schedule. While we can automate shading production with the presence of geometry, it is impractical for artists to model the geometry for every single drawing. In this work, we aim to automate shading generation by analyzing the local shapes, connections, and spatial arrangement of wrinkle strokes in a clean line drawing. By this, artists can focus more on the design rather than the tedious manual editing work, and experiment with different shading effects under different conditions. To achieve this, we have made three key technical contributions. First, we model five perceptual cues by exploring relevant psychological principles to estimate the local depth profile around strokes. Second, we formulate stroke interpretation as a global optimization model that simultaneously balances different interpretations suggested by the perceptual cues and minimizes the interpretation discrepancy. Lastly, we develop a wrinkle-aware inflation method to generate a height field for the surface to support the shading region computation. In particular, we enable the generation of two commonly-used shading styles: 3D-like soft shading and manga-style flat shading.",
"title": ""
},
{
"docid": "538f3c049ee33c7e8f1895a97dc7a808",
"text": "For companies and their employees, social media allows new ways to communicate with customers and colleagues. Vast amounts of information are being exchanged in social media. Information is a highly valuable asset, and therefore questions concerning information security become more and more important. Companies are becoming increasingly worried about information security in social media, but so far, this issue has not been studied. The present research closes this gap by studying the information security challenges social media represents for organizations. The research was conducted as a qualitative multiple case study for which information security managers from eleven public and private companies in one European country were interviewed. The study has three main findings. First, challenges arising from employees’ actions or unawareness in social media (especially reputation damage) seem to represent bigger threats to information security than threats caused by outside attacks. Second, the confusion of private and professional roles in social media represents an information security risk, and distinguishing between these roles becomes more difficult the higher an employee’s position in the company. Third, communication with employees and colleagues represents an information security challenge especially when communication is not steered by the company. Implications for research and practice are discussed.",
"title": ""
},
{
"docid": "b3385e52f87a9c3a05d15b90231d3efe",
"text": "A non-reward attractor theory of depression is proposed based on the operation of the lateral orbitofrontal cortex and supracallosal cingulate cortex. The orbitofrontal cortex contains error neurons that respond to non-reward for many seconds in an attractor state that maintains a memory of the non-reward. The human lateral orbitofrontal cortex is activated by non-reward during reward reversal, and by a signal to stop a response that is now incorrect. Damage to the human orbitofrontal cortex impairs reward reversal learning. Not receiving reward can produce depression. The theory proposed is that in depression, this lateral orbitofrontal cortex non-reward system is more easily triggered, and maintains its attractor-related firing for longer. This triggers negative cognitive states, which in turn have positive feedback top-down effects on the orbitofrontal cortex non-reward system. Treatments for depression, including ketamine, may act in part by quashing this attractor. The mania of bipolar disorder is hypothesized to be associated with oversensitivity and overactivity in the reciprocally related reward system in the medial orbitofrontal cortex and pregenual cingulate cortex.",
"title": ""
},
{
"docid": "3813fd345ba9f3c19303c64db1b7e9b2",
"text": "In recent years, statistical learning (SL) research has seen a growing interest in tracking individual performance in SL tasks, mainly as a predictor of linguistic abilities. We review studies from this line of research and outline three presuppositions underlying the experimental approach they employ: (i) that SL is a unified theoretical construct; (ii) that current SL tasks are interchangeable, and equally valid for assessing SL ability; and (iii) that performance in the standard forced-choice test in the task is a good proxy of SL ability. We argue that these three critical presuppositions are subject to a number of theoretical and empirical issues. First, SL shows patterns of modality- and informational-specificity, suggesting that SL cannot be treated as a unified construct. Second, different SL tasks may tap into separate sub-components of SL that are not necessarily interchangeable. Third, the commonly used forced-choice tests in most SL tasks are subject to inherent limitations and confounds. As a first step, we offer a methodological approach that explicitly spells out a potential set of different SL dimensions, allowing for better transparency in choosing a specific SL task as a predictor of a given linguistic outcome. We then offer possible methodological solutions for better tracking and measuring SL ability. Taken together, these discussions provide a novel theoretical and methodological approach for assessing individual differences in SL, with clear testable predictions.This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'.",
"title": ""
},
{
"docid": "c32d61da51308397d889db143c3e6f9d",
"text": "Children’s neurological development is influenced by their experiences. Early experiences and the environments in which they occur can alter gene expression and affect long-term neural development. Today, discretionary screen time, often involving multiple devices, is the single main experience and environment of children. Various screen activities are reported to induce structural and functional brain plasticity in adults. However, childhood is a time of significantly greater changes in brain anatomical structure and connectivity. There is empirical evidence that extensive exposure to videogame playing during childhood may lead to neuroadaptation and structural changes in neural regions associated with addiction. Digital natives exhibit a higher prevalence of screen-related ‘addictive’ behaviour that reflect impaired neurological rewardprocessing and impulse-control mechanisms. Associations are emerging between screen dependency disorders such as Internet Addiction Disorder and specific neurogenetic polymorphisms, abnormal neural tissue and neural function. Although abnormal neural structural and functional characteristics may be a precondition rather than a consequence of addiction, there may also be a bidirectional relationship. As is the case with substance addictions, it is possible that intensive routine exposure to certain screen activities during critical stages of neural development may alter gene expression resulting in structural, synaptic and functional changes in the developing brain leading to screen dependency disorders, particularly in children with predisposing neurogenetic profiles. There may also be compound/secondary effects on neural development. Screen dependency disorders, even at subclinical levels, involve high levels of discretionary screen time, inducing greater child sedentary behaviour thereby reducing vital aerobic fitness, which plays an important role in the neurological health of children, particularly in brain structure and function. Child health policy must therefore adhere to the principle of precaution as a prudent approach to protecting child neurological integrity and well-being. This paper explains the basis of current paediatric neurological concerns surrounding screen dependency disorders and proposes preventive strategies for child neurology and allied professions.",
"title": ""
},
{
"docid": "a0bb908ff9c7cf14c34acfcdc47e4c1f",
"text": "DCF77 is a longwave radio transmitter located in Germany. Atomic clocks generate a 77.5-kHz carrier which is amplitudeand phase-modulated to broadcast the official time. The signal is used by industrial and consumer radio-controlled clocks. DCF77 faces competition from the Global Positioning System (GPS) which provides higher accuracy time. Still, DCF77 and other longwave time services worldwide remain popular because they allow indoor reception at lower cost, lower power, and sufficient accuracy. Indoor longwave reception is challenged by signal attenuation and electromagnetic interference from an increasing number of devices, particularly switched-mode power supplies. This paper introduces new receiver architectures and compares them with existing detectors and time decoders. Simulations and analytical calculations characterize the performance in terms of bit error rate and decoding probability, depending on input noise and narrowband interference. The most promising detector with maximum-likelihood time decoder displays the time in less than 60 s after powerup and at a noise level of Eb/N0 = 2.7 dB, an improvement of 20 dB over previous receivers. A field-programmable gate array-based demonstration receiver built for the purposes of this paper confirms the capabilities of these new algorithms. The findings of this paper enable future high-performance DCF77 receivers and further study of indoor longwave reception.",
"title": ""
},
{
"docid": "1a91e143f4430b11f3af242d6e07cbba",
"text": "Random graph matching refers to recovering the underlying vertex correspondence between two random graphs with correlated edges; a prominent example is when the two random graphs are given by Erdős-Rényi graphs G(n, d n ). This can be viewed as an average-case and noisy version of the graph isomorphism problem. Under this model, the maximum likelihood estimator is equivalent to solving the intractable quadratic assignment problem. This work develops an Õ(nd + n)-time algorithm which perfectly recovers the true vertex correspondence with high probability, provided that the average degree is at least d = Ω(log n) and the two graphs differ by at most δ = O(log−2(n)) fraction of edges. For dense graphs and sparse graphs, this can be improved to δ = O(log−2/3(n)) and δ = O(log−2(d)) respectively, both in polynomial time. The methodology is based on appropriately chosen distance statistics of the degree profiles (empirical distribution of the degrees of neighbors). Before this work, the best known result achieves δ = O(1) and n ≤ d ≤ n for some constant c with an n-time algorithm [BCL18] and δ = Õ((d/n)) and d = Ω̃(n) with a polynomial-time algorithm [DCKG18].",
"title": ""
},
{
"docid": "56ed1b2d57e2a76ce35f8ac93baf185e",
"text": "This study investigated the relationship between sprint start performance (5-m time) and strength and power variables. Thirty male athletes [height: 183.8 (6.8) cm, and mass: 90.6 (9.3) kg; mean (SD)] each completed six 10-m sprints from a standing start. Sprint times were recorded using a tethered running system and the force-time characteristics of the first ground contact were recorded using a recessed force plate. Three to six days later subjects completed three concentric jump squats, using a traditional and split technique, at a range of external loads from 30–70% of one repetition maximum (1RM). Mean (SD) braking impulse during acceleration was negligible [0.009 (0.007) N/s/kg) and showed no relationship with 5 m time; however, propulsive impulse was substantial [0.928 (0.102) N/s/kg] and significantly related to 5-m time (r=−0.64, P<0.001). Average and peak power were similar during the split squat [7.32 (1.34) and 17.10 (3.15) W/kg] and the traditional squat [7.07 (1.25) and 17.58 (2.85) W/kg], and both were significantly related to 5-m time (r=−0.64 to −0.68, P<0.001). Average power was maximal at all loads between 30% and 60% of 1RM for both squats. Split squat peak power was also maximal between 30% and 60% of 1RM; however, traditional squat peak power was maximal between 50% and 70% of 1RM. Concentric force development is critical to sprint start performance and accordingly maximal concentric jump power is related to sprint acceleration.",
"title": ""
},
{
"docid": "77362cc72d7a09dbbb0f067c11fe8087",
"text": "The Cloud computing paradigm has revolutionised the computer science horizon during the past decade and has enabled the emergence of computing as the fifth utility. It has captured significant attention of academia, industries, and government bodies. Now, it has emerged as the backbone of modern economy by offering subscription-based services anytime, anywhere following a pay-as-you-go model. This has instigated (1) shorter establishment times for start-ups, (2) creation of scalable global enterprise applications, (3) better cost-to-value associativity for scientific and high-performance computing applications, and (4) different invocation/execution models for pervasive and ubiquitous applications. The recent technological developments and paradigms such as serverless computing, software-defined networking, Internet of Things, and processing at network edge are creating new opportunities for Cloud computing. However, they are also posing several new challenges and creating the need for new approaches and research strategies, as well as the re-evaluation of the models that were developed to address issues such as scalability, elasticity, reliability, security, sustainability, and application models. The proposed manifesto addresses them by identifying the major open challenges in Cloud computing, emerging trends, and impact areas. It then offers research directions for the next decade, thus helping in the realisation of Future Generation Cloud Computing.",
"title": ""
},
{
"docid": "16f1b038f51e614da06ba84ebd175e14",
"text": "This paper explores how to extract argumentation-relevant information automatically from a corpus of legal decision documents, and how to build new arguments using that information. For decision texts, we use the Vaccine/Injury Project (V/IP) Corpus, which contains default-logic annotations of argument structure. We supplement this with presuppositional annotations about entities, events, and relations that play important roles in argumentation, and about the level of confidence that arguments would be successful. We then propose how to integrate these semantic-pragmatic annotations with syntactic and domain-general semantic annotations, such as those generated in the DeepQA architecture, and outline how to apply machine learning and scoring techniques similar to those used in the IBM Watson system for playing the Jeopardy! question-answer game. We replace this game-playing goal, however, with the goal of learning to construct legal arguments.",
"title": ""
},
{
"docid": "fb2028ca0e836452862a2cb1fa707d28",
"text": "State-of-the-art approaches for unsupervised keyphrase extraction are typically evaluated on a single dataset with a single parameter setting. Consequently, it is unclear how effective these approaches are on a new dataset from a different domain, and how sensitive they are to changes in parameter settings. To gain a better understanding of state-of-the-art unsupervised keyphrase extraction algorithms, we conduct a systematic evaluation and analysis of these algorithms on a variety of standard evaluation datasets.",
"title": ""
},
{
"docid": "ec492f3ca84546c84a9ee8e1992b1baf",
"text": "Sketch is an important media for human to communicate ideas, which reflects the superiority of human intelligence. Studies on sketch can be roughly summarized into recognition and generation. Existing models on image recognition failed to obtain satisfying performance on sketch classification. But for sketch generation, a recent study proposed a sequence-to-sequence variational-auto-encoder (VAE) model called sketch-rnn which was able to generate sketches based on human inputs. The model achieved amazing results when asked to learn one category of object, such as an animal or a vehicle. However, the performance dropped when multiple categories were fed into the model. Here, we proposed a model called sketch-pix2seq which could learn and draw multiple categories of sketches. Two modifications were made to improve the sketch-rnn model: one is to replace the bidirectional recurrent neural network (BRNN) encoder with a convolutional neural network(CNN); the other is to remove the Kullback-Leibler divergence from the objective function of VAE. Experimental results showed that models with CNN encoders outperformed those with RNN encoders in generating human-style sketches. Visualization of the latent space illustrated that the removal of KL-divergence made the encoder learn a posterior of latent space that reflected the features of different categories. Moreover, the combination of CNN encoder and removal of KL-divergence, i.e., the sketchpix2seq model, had better performance in learning and generating sketches of multiple categories and showed promising results in creativity tasks.",
"title": ""
},
{
"docid": "09b273c9e77f6fc1b2de20f50227c44d",
"text": "Age and gender are complementary soft biometric traits for face recognition. Successful estimation of age and gender from facial images taken under real-world conditions can contribute improving the identification results in the wild. In this study, in order to achieve robust age and gender classification in the wild, we have benefited from Deep Convolutional Neural Networks based representation. We have explored transferability of existing deep convolutional neural network (CNN) models for age and gender classification. The generic AlexNet-like architecture and domain specific VGG-Face CNN model are employed and fine-tuned with the Adience dataset prepared for age and gender classification in uncontrolled environments. In addition, task specific GilNet CNN model has also been utilized and used as a baseline method in order to compare with transferred models. Experimental results show that both transferred deep CNN models outperform the GilNet CNN model, which is the state-of-the-art age and gender classification approach on the Adience dataset, by an absolute increase of 7% and 4.5% in accuracy, respectively. This outcome indicates that transferring a deep CNN model can provide better classification performance than a task specific CNN model, which has a limited number of layers and trained from scratch using a limited amount of data as in the case of GilNet. Domain specific VGG-Face CNN model has been found to be more useful and provided better performance for both age and gender classification tasks, when compared with generic AlexNet-like model, which shows that transfering from a closer domain is more useful.",
"title": ""
},
{
"docid": "a267fadc2875fc16b69635d4592b03ae",
"text": "We investigated neural correlates of human visual orienting using event-related functional magnetic resonance imaging (fMRI). When subjects voluntarily directed attention to a peripheral location, we recorded robust and sustained signals uniquely from the intraparietal sulcus (IPs) and superior frontal cortex (near the frontal eye field, FEF). In the ventral IPs and FEF only, the blood oxygen level dependent signal was modulated by the direction of attention. The IPs and FEF also maintained the most sustained level of activation during a 7-sec delay, when subjects maintained attention at the peripheral cued location (working memory). Therefore, the IPs and FEF form a dorsal network that controls the endogenous allocation and maintenance of visuospatial attention. A separate right hemisphere network was activated by the detection of targets at unattended locations. Activation was largely independent of the target's location (visual field). This network included among other regions the right temporo-parietal junction and the inferior frontal gyrus. We propose that this cortical network is important for reorienting to sensory events.",
"title": ""
},
{
"docid": "87c7875416503ab1f12de90a597959a4",
"text": "Texts in natural scenes carry rich semantic information, which can be used to assist a wide range of applications, such as object recognition, image/video retrieval, mapping/navigation, and human computer interaction. However, most existing systems are designed to detect and recognize horizontal (or near-horizontal) texts. Due to the increasing popularity of mobile-computing devices and applications, detecting texts of varying orientations from natural images under less controlled conditions has become an important but challenging task. In this paper, we propose a new algorithm to detect texts of varying orientations. Our algorithm is based on a two-level classification scheme and two sets of features specially designed for capturing the intrinsic characteristics of texts. To better evaluate the proposed method and compare it with the competing algorithms, we generate a comprehensive dataset with various types of texts in diverse real-world scenes. We also propose a new evaluation protocol, which is more suitable for benchmarking algorithms for detecting texts in varying orientations. Experiments on benchmark datasets demonstrate that our system compares favorably with the state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on variant texts in complex natural scenes.",
"title": ""
},
{
"docid": "f48e6475c0afeac09262cdc2f5681208",
"text": "Semantic analysis of sport sequences requires camera calibration to obtain player and ball positions in real-world coordinates. For court sports like tennis, the marker lines on the field can be used to determine the calibration parameters. We propose a real-time calibration algorithm that can be applied to all court sports simply by exchanging the court model. The algorithm is based on (1) a specialized court-line detector, (2) a RANSAC-based line parameter estimation, (3) a combinatorial optimization step to localize the court within the set of detected line segments, and (4) an iterative court-model tracking step. Our results show real-time calibration of, e.g., tennis and soccer sequences with a computation time of only about 6 ms per frame.",
"title": ""
},
{
"docid": "16156f3f821fe6d65c8a753995f50b18",
"text": "Memory over commitment enables cloud providers to host more virtual machines on a single physical server, exploiting spare CPU and I/O capacity when physical memory becomes the bottleneck for virtual machine deployment. However, over commiting memory can also cause noticeable application performance degradation. We present Ginkgo, a policy framework for over omitting memory in an informed and automated fashion. By directly correlating application-level performance to memory, Ginkgo automates the redistribution of scarce memory across all virtual machines, satisfying performance and capacity constraints. Ginkgo also achieves memory gains for traditionally fixed-size Java applications by coordinating the redistribution of available memory with the activities of the Java Virtual Machine heap. When compared to a non-over commited system, Ginkgo runs the Day Trader 2.0 and SPEC Web 2009 benchmarks with the same number of virtual machines while saving up to 73% (50% omitting free space) of a physical server's memory while keeping application performance degradation within 7%.",
"title": ""
},
{
"docid": "7bf0e8eb4a70abcc36d101a300e4a03c",
"text": "In this paper, we present a new algorithm that automatically classifies wandering patterns (or behaviors) of patients with Alzheimer's disease and other different types of dementia. Experimental results on a real-life dataset show that this algorithm can provide a robust and credible assistive technology for monitoring patients with dementia (PWD) who are prone to wandering. Combined with indoor and outdoor location technologies using ubiquitous devices such as smart phones, we also demonstrate the feasibility of a remote mobile healthcare monitoring solution that is capable of reasoning about wandering behaviors of PWD and real-time detection of abnormalities that require timely intervention from caregivers.",
"title": ""
}
] |
scidocsrr
|
5cc337f41627bc1304d5178cc34efebd
|
Combining self-supervised learning and imitation for vision-based rope manipulation
|
[
{
"docid": "57f2b164538adcd242f66b80d4218cef",
"text": "Suturing is an important yet time-consuming part of surgery. A fast and robust autonomous procedure could reduce surgeon fatigue, and shorten operation times. It could also be of particular importance for suturing in remote tele-surgery settings where latency can complicate the master-slave mode control that is the current practice for robotic surgery with systems like the da Vinci®. We study the applicability of the trajectory transfer algorithm proposed in [12] to the automation of suturing. The core idea of this procedure is to first use non-rigid registration to find a 3D warping function which maps the demonstration scene onto the test scene, then use this warping function to transform the robot end-effector trajectory. Finally a robot joint trajectory is generated by solving a trajectory optimization problem that attempts to find the closest feasible trajectory, accounting for external constraints, such as joint limits and obstacles. Our experiments investigate generalization from a single demonstration to differing initial conditions. A first set of experiments considers the problem of having a simulated Raven II system [5] suture two flaps of tissue together. A second set of experiments considers a PR2 robot performing sutures in a scaled-up experimental setup. The simulation experiments were fully autonomous. For the real-world experiments we provided human input to assist with the detection of landmarks to be fed into the registration algorithm. The success rate for learning from a single demonstration is high for moderate perturbations from the demonstration's initial conditions, and it gradually decreases for larger perturbations.",
"title": ""
}
] |
[
{
"docid": "d83f34978bd6dd72131c36f8adb34850",
"text": "Images in social networks share different destinies: some are going to become popular while others are going to be completely unnoticed. In this paper we propose to use visual sentiment features together with three novel context features to predict a concise popularity score of social images. Experiments on large scale datasets show the benefits of proposed features on the performance of image popularity prediction. Exploiting state-of-the-art sentiment features, we report a qualitative analysis of which sentiments seem to be related to good or poor popularity. To the best of our knowledge, this is the first work understanding specific visual sentiments that positively or negatively influence the eventual popularity of images.",
"title": ""
},
{
"docid": "9677d364752d50160557bd8e9dfa0dfb",
"text": "a Junior Research Group of Primate Sexual Selection, Department of Reproductive Biology, German Primate Center Courant Research Center ‘Evolution of Social Behavior’, Georg-August-Universität, Germany c Junior Research Group of Primate Kin Selection, Department of Primatology, Max-Planck-Institute for Evolutionary Anthropology, Germany d Institute of Biology, Faculty of Bioscience, Pharmacy and Psychology, University of Leipzig, Germany e Faculty of Veterinary Medicine, Bogor Agricultural University, Indonesia",
"title": ""
},
{
"docid": "882f463d187854967709c95ecd1d2fc1",
"text": "In this paper, we propose a zoom-out-and-in network for generating object proposals. We utilize different resolutions of feature maps in the network to detect object instances of various sizes. Specifically, we divide the anchor candidates into three clusters based on the scale size and place them on feature maps of distinct strides to detect small, medium and large objects, respectively. Deeper feature maps contain region-level semantics which can help shallow counterparts to identify small objects. Therefore we design a zoom-in sub-network to increase the resolution of high level features via a deconvolution operation. The high-level features with high resolution are then combined and merged with low-level features to detect objects. Furthermore, we devise a recursive training pipeline to consecutively regress region proposals at the training stage in order to match the iterative regression at the testing stage. We demonstrate the effectiveness of the proposed method on ILSVRC DET and MS COCO datasets, where our algorithm performs better than the state-of-the-arts in various evaluation metrics. It also increases average precision by around 2% in the detection system.",
"title": ""
},
{
"docid": "66435d5b38f460edf7781372cd4e125b",
"text": "Network Function Virtualization (NFV) is emerging as a new paradigm for providing elastic network functions through flexible virtual network function (VNF) instances executed on virtualized computing platforms exemplified by cloud datacenters. In the new NFV market, well defined VNF instances each realize an atomic function that can be chained to meet user demands in practice. This work studies the dynamic market mechanism design for the transaction of VNF service chains in the NFV market, to help relinquish the full power of NFV. Combining the techniques of primal-dual approximation algorithm design with Myerson's characterization of truthful mechanisms, we design a VNF chain auction that runs efficiently in polynomial time, guarantees truthfulness, and achieves near-optimal social welfare in the NFV eco-system. Extensive simulation studies verify the efficacy of our auction mechanism.",
"title": ""
},
{
"docid": "b4f3dc8134b9c04e60fba8a0fda70545",
"text": "Many important applications – from big data analytics to information retrieval, gene expression analysis, and numerical weather prediction – require the solution of large dense singular value decompositions (SVD). In many cases the problems are too large to fit into the computer’s main memory, and thus require specialized out-of-core algorithms that use disk storage. In this paper, we analyze the SVD communications, as related to hierarchical memories, and design a class of algorithms that minimizes them. This class includes out-of-core SVDs but can also be applied between other consecutive levels of the memory hierarchy, e.g., GPU SVD using the CPU memory for large problems. We call these out-of-memory (OOM) algorithms. To design OOM SVDs, we first study the communications for both classical one-stage blocked SVD and two-stage tiled SVD. We present the theoretical analysis and strategies to design, as well as implement, these communication avoiding OOM SVD algorithms. We show performance results for multicore architecture that illustrate our theoretical findings and match our performance models.",
"title": ""
},
{
"docid": "c9a6fb06acb9e33a607c7f183ff6a626",
"text": "The objective of the study was to examine the correlations between intracranial aneurysm morphology and wall shear stress (WSS) to identify reliable predictors of rupture risk. Seventy-two intracranial aneurysms (41 ruptured and 31 unruptured) from 63 patients were studied retrospectively. All aneurysms were divided into two categories: narrow (aspect ratio ≥1.4) and wide-necked (aspect ratio <1.4 or neck width ≥4 mm). Computational fluid dynamics was used to determine the distribution of WSS, which was analyzed between different morphological groups and between ruptured and unruptured aneurysms. Sections of the walls of clipped aneurysms were stained with hematoxylin–eosin, observed under a microscope, and photographed. Ruptured aneurysms were statistically more likely to have a greater low WSS area ratio (LSAR) (P = 0.001) and higher aneurysms parent WSS ratio (P = 0.026) than unruptured aneurysms. Narrow-necked aneurysms were statistically more likely to have a larger LSAR (P < 0.001) and lower values of MWSS (P < 0.001), mean aneurysm-parent WSS ratio (P < 0.001), HWSS (P = 0.012), and the highest aneurysm-parent WSS ratio (P < 0.001) than wide-necked aneurysms. The aneurysm wall showed two different pathological changes associated with high or low WSS in wide-necked aneurysms. Aneurysm morphology could affect the distribution and magnitude of WSS on the basis of differences in blood flow. Both high and low WSS could contribute to focal wall damage and rupture through different mechanisms associated with each morphological type.",
"title": ""
},
{
"docid": "3a8f14166954036f85914183dd7a7ee4",
"text": "Abused and nonabused child witnesses to parental violence temporarily residing in a battered women's shelter were compared to children from a similar economic background on measures of self-esteem, anxiety, depression, and behavior problems, using mothers' and self-reports. Results indicated significantly more distress in the abused-witness children than in the comparison group, with nonabused witness children's scores falling between the two. Age of child and types of violence were mediating factors. Implications of the findings are discussed.",
"title": ""
},
{
"docid": "33b37422ace8a300d53d4896de6bbb6f",
"text": "Digital investigations of the real world through point clouds and derivatives are changing how curators, cultural heritage researchers and archaeologists work and collaborate. To progressively aggregate expertise and enhance the working proficiency of all professionals, virtual reconstructions demand adapted tools to facilitate knowledge dissemination. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining relevant information for the end user. In this paper, we review the state of the art of point cloud integration within archaeological applications, giving an overview of 3D technologies for heritage, digital exploitation and case studies showing the assimilation status within 3D GIS. Identified issues and new perspectives are addressed through a knowledge-based point cloud processing framework for multi-sensory data, and illustrated on mosaics and quasi-planar objects. A new acquisition, pre-processing, segmentation and ontology-based classification method on hybrid point clouds from both terrestrial laser scanning and dense image matching is proposed to enable reasoning for information extraction. Experiments in detection and semantic enrichment show promising results of 94% correct semantization. Then, we integrate the metadata in an archaeological smart point cloud data structure allowing spatio-semantic queries related to CIDOC-CRM. Finally, a WebGL prototype is presented that leads to efficient communication between actors by proposing optimal 3D data visualizations as a basis on which interaction can grow.",
"title": ""
},
{
"docid": "d7f349fd58c2d00acc29e5efdbea7073",
"text": "Digit ratio (2D:4D), a putative correlate of prenatal testosterone, has been found to relate to performance in sport and athletics such that low 2D:4D (high prenatal testosterone) correlates with high performance. Speed in endurance races is strongly related to 2D:4D, and may be one factor that underlies the link between sport and 2D:4D, but nothing is known of the relationship between 2D:4D and sprinting speed. Here we show that running times over 50 m were positively correlated with 2D:4D in a sample of 241 boys (i.e. runners with low 2D:4D ran faster than runners with high 2D:4D). The relationship was also found for 50 m split times (at 20, 30, and 40 m) and was independent of age, BMI, and an index of maturity. However, associations between 2D:4D and sprinting speed were much weaker than those reported for endurance running. This suggests that 2D:4D is a relatively weak predictor of strength and a stronger predictor of efficiency in aerobic exercise. We discuss the effect sizes for relationships between 2D:4D and sport and target traits in general, and identify areas of strength and weakness in digit ratio research.",
"title": ""
},
{
"docid": "05eaf278ed39cd6a8522f812589388c6",
"text": "Several recent software systems have been designed to obtain novel annotation of cross-referencing text fragments and Wikipedia pages. Tagme is state of the art in this setting and can accurately manage short textual fragments (such as snippets of search engine results, tweets, news, or blogs) on the fly.",
"title": ""
},
{
"docid": "83ed915556df1c00f6448a38fb3b7ec3",
"text": "Wandering liver or hepatoptosis is a rare entity in medical practice. It is also known as floating liver and hepatocolonic vagrancy. It describes the unusual finding of, usually through radiology, the alternate appearance of the liver on the right and left side, respectively. . The first documented case of wandering liver was presented by Heister in 1754 Two centuries later In 1958, Grayson recognized and described the association of wandering liver and tachycardia. In his paper, Grayson details the classical description of wandering liver documented by French in his index of differential diagnosis. In 2010 Jan F. Svensson et al described the first report of a wandering liver in a neonate, reviewed and a discussed the possible treatment strategies. When only displaced, it may wrongly be thought to be enlarged liver",
"title": ""
},
{
"docid": "82e78a0e89a5fe7ca4465af9d7a4dc3e",
"text": "While Six Sigma is increasingly implemented in industry, little academic research has been done on Six Sigma and its influence on quality management theory and application. There is a criticism that Six Sigma simply puts traditional quality management practices in a new package. To investigate this issue and the role of Six Sigma in quality management, this study reviewed both the traditional quality management and Six Sigma literatures and identified three new practices that are critical for implementing Six Sigma’s concept and method in an organization. These practices are referred to as: Six Sigma role structure, Six Sigma structured improvement procedure, and Six Sigma focus on metrics. A research model and survey instrument were developed to investigate how these Six Sigma practices integrate with seven traditional quality management practices to affect quality performance and business performance. Test results based on a sample of 226 US manufacturing plants revealed that the three Six Sigma practices are distinct practices from traditional quality management practices, and that they complement the traditional quality management practices in improving performance. The implications of the findings for researchers and practitioners are discussed and further research directions are offered. # 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7265c5e3f64b0a19592e7b475649433c",
"text": "A power transformer outage has a dramatic financial consequence not only for electric power systems utilities but also for interconnected customers. The service reliability of this important asset largely depends upon the condition of the oil-paper insulation. Therefore, by keeping the qualities of oil-paper insulation system in pristine condition, the maintenance planners can reduce the decline rate of internal faults. Accurate diagnostic methods for analyzing the condition of transformers are therefore essential. Currently, there are various electrical and physicochemical diagnostic techniques available for insulation condition monitoring of power transformers. This paper is aimed at the description, analysis and interpretation of modern physicochemical diagnostics techniques for assessing insulation condition in aged transformers. Since fields and laboratory experiences have shown that transformer oil contains about 70% of diagnostic information, the physicochemical analyses of oil samples can therefore be extremely useful in monitoring the condition of power transformers.",
"title": ""
},
{
"docid": "f811ec2ab6ce7e279e97241dc65de2a5",
"text": "Summary Kraljic's purchasing portfolio approach has inspired many academic writers to undertake further research into purchasing portfolio models. Although it is evident that power and dependence issues play an important role in the Kraljic matrix, scant quantitative research has been undertaken in this respect. In our study we have filled this gap by proposing quantitative measures for ‘relative power’ and ‘total interdependence’. By undertaking a comprehensive survey among Dutch purchasing professionals, we have empirically quantified ‘relative power’ and ‘total interdependence’ for each quadrant of the Kraljic portfolio matrix. We have compared theoretical expectations on power and dependence levels with our empirical findings. A remarkable finding is the observed supplier dominance in the strategic quadrant of the Kraljic matrix. This indicates that the supplier dominates even satisfactory partnerships. In the light of this finding future research cannot assume any longer that buyersupplier relationships in the strategic quadrant of the Kraljic matrix are necessarily characterised by symmetric power. 1 Marjolein C.J. Caniëls, Open University of the Netherlands (OUNL), Faculty of Management Sciences (MW), P.O. Box 2960, 6401 DL Heerlen, the Netherlands. Tel: +31 45 5762724; Fax: +31 45 5762103 E-mail: [email protected] 2 Cees J. Gelderman, Open University of the Netherlands (OUNL), Faculty of Management Sciences (MW) P.O. Box 2960, 6401 DL Heerlen, the Netherlands. Tel: +31 45 5762590; Fax: +31 45 5762103 E-mail: [email protected]",
"title": ""
},
{
"docid": "52ebf28afd8ae56816fb81c19e8890b6",
"text": "In this paper we aim to model the relationship between the text of a political blog post and the comment volume—that is, the total amount of response—that a post will receive. We seek to accurately identify which posts will attract a high-volume response, and also to gain insight about the community of readers and their interests. We design and evaluate variations on a latentvariable topic model that links text to comment volume. Introduction What makes a blog post noteworthy? One measure of the popularity or breadth of interest of a blog post is the extent to which readers of the blog are inspired to leave comments on the post. In this paper, we study the relationship between the text contents of a blog post and the volume of response it will receive from blog readers. Modeling this relationship has the potential to reveal the interests of a blog’s readership community to its authors, readers, advertisers, and scientists studying the blogosphere, but it may also be useful in improving technologies for blog search, recommendation, summarization, and so on. There are many ways to define “popularity” in blogging. In this study, we focus exclusively on the aggregate volume of comments. Commenting is an important activity in the political blogosphere, giving a blog site the potential to become a discussion forum. For a given blog post, we treat comment volume as a target output variable, and use generative probabilistic models to learn from past data the relationship between a blog post’s text contents and its comment volume. While many clues might be useful in predicting comment volume (e.g., the post’s author, the time the post appears, the length of the post, etc.) here we focus solely on the text contents of the post. We first describe the data and experimental framework, including a simple baseline. We then explore how latentvariable topic models can be used to make better predictions about comment volume. These models reveal that part of the variation in comment volume can be explained by the topic of the blog post, and elucidate the relative degrees to which readers find each topic comment-worthy. ∗The authors acknowledge research support from HP Labs and helpful comments from the reviewers and Jacob Eisenstein. Copyright c © 2010, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Predicting Comment Volume Our goal is to predict some measure of the volume of comments on a new blog post.1 Volume might be measured as the number of words in the comment section, the number of comments, the number of distinct users who leave comments, or a variety of other ways. Any of these can be affected by uninteresting factors—the time of day the post appears, a side conversation, a surge in spammer activity—but these quantities are easily measured. In research on blog data, comments are often ignored, and it is easy to see why: comments are very noisy, full of non-standard grammar and spelling, usually unedited, often cryptic and uninformative, at least to those outside the blog’s community. A few studies have focused on information in comments. Mishe and Glance (2006) showed the value of comments in characterizing the social repercussions of a post, including popularity and controversy. Their largescale user study correlated popularity and comment activity. Yano et al. 
(2009) sought to predict which members of blog’s community would leave comments, and in some cases used the text contents of the comments themselves to discover topics related to both words and user comment behavior. This work is similar, but we seek to predict the aggregate behavior of the blog post’s readers: given a new blog post, how much will the community comment on it?",
"title": ""
},
{
"docid": "bceb9f8cc1726017e564c6474618a238",
"text": "The modulators are the basic requirement of communication systems they are designed to reduce the channel distortion & to use in RF communication hence many type of carrier modulation techniques has been already proposed according to channel properties & data rate of the system. QPSK (Quadrature Phase Shift Keying) is one of the modulation schemes used in wireless communication system due to its ability to transmit twice the data rate for a given bandwidth. The QPSK is the most often used scheme since it does not suffer from BER (Bit Error rate) degradation while the bandwidth efficiency is increased. It is very popular in Satellite communication. As the design of complex mathematical models such as QPSK modulator in „pure HDL‟ is very difficult and costly; it requires from designer many additional skills and is time-consuming. To overcome these types of difficulties, the proposed QPSK modulator can be implemented on FPGA by using the concept of hardware co-simulation at Low power. In this process, QPSK modulator is simulated with Xilinx System Generator Simulink software and later on it is converted in Very high speed integrated circuit Hardware Descriptive Language to implement it on FPGA. Along with the co-simulation, power of the proposed QPSK modulator can be minimized than conventional QPSK modulator. As a conclusion, the proposed architecture will not only able to operate on co-simulation platform but at the same time it will significantly consume less operational power.",
"title": ""
},
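As a companion to the QPSK abstract above, here is a minimal sketch of a Gray-coded QPSK symbol mapping in plain Python/NumPy. It is not the paper's Xilinx System Generator flow; the Gray mapping choice, the function name, and the unit-energy normalisation are assumptions of this sketch.

```python
# Hypothetical illustration of a Gray-coded QPSK symbol mapping such as the
# modulator described above would implement; names and the normalisation
# choice are assumptions, not taken from the paper.
import numpy as np

def qpsk_modulate(bits: np.ndarray) -> np.ndarray:
    """Map an even-length bit array onto unit-energy QPSK symbols.

    Gray mapping: 00 -> (+1+1j), 01 -> (+1-1j), 11 -> (-1-1j), 10 -> (-1+1j),
    all scaled by 1/sqrt(2) so that E[|s|^2] = 1.
    """
    if bits.size % 2:
        raise ValueError("QPSK needs an even number of bits")
    b = bits.reshape(-1, 2)
    i = 1 - 2 * b[:, 0]          # first bit selects the in-phase sign
    q = 1 - 2 * b[:, 1]          # second bit selects the quadrature sign
    return (i + 1j * q) / np.sqrt(2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, 16)
    print(qpsk_modulate(bits))
```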
{
"docid": "6e837f73398e1f2da537b31d5a696ec6",
"text": "With the development of high computational devices, deep neural networks (DNNs), in recent years, have gained significant popularity in many Artificial Intelligence (AI) applications. However, previous efforts have shown that DNNs were vulnerable to strategically modified samples, named adversarial examples. These samples are generated with some imperceptible perturbations, but can fool the DNNs to give false predictions. Inspired by the popularity of generating adversarial examples for image DNNs, research efforts on attacking DNNs for textual applications emerges in recent years. However, existing perturbation methods for images cannot be directly applied to texts as text data is discrete. In this article, we review research works that address this difference and generate textual adversarial examples on DNNs. We collect, select, summarize, discuss and analyze these works in a comprehensive way and cover all the related information to make the article self-contained. Finally, drawing on the reviewed literature, we provide further discussions and suggestions on this topic.",
"title": ""
},
{
"docid": "d0fc352e347f7df09140068a4195eb9e",
"text": "A wave of alternative coins that can be effectively mined without specialized hardware, and a surge in cryptocurrencies' market value has led to the development of cryptocurrency mining ( cryptomining ) services, such as Coinhive, which can be easily integrated into websites to monetize the computational power of their visitors. While legitimate website operators are exploring these services as an alternative to advertisements, they have also drawn the attention of cybercriminals: drive-by mining (also known as cryptojacking ) is a new web-based attack, in which an infected website secretly executes JavaScript code and/or a WebAssembly module in the user's browser to mine cryptocurrencies without her consent. In this paper, we perform a comprehensive analysis on Alexa's Top 1 Million websites to shed light on the prevalence and profitability of this attack. We study the websites affected by drive-by mining to understand the techniques being used to evade detection, and the latest web technologies being exploited to efficiently mine cryptocurrency. As a result of our study, which covers 28 Coinhive-like services that are widely being used by drive-by mining websites, we identified 20 active cryptomining campaigns. Motivated by our findings, we investigate possible countermeasures against this type of attack. We discuss how current blacklisting approaches and heuristics based on CPU usage are insufficient, and present MineSweeper, a novel detection technique that is based on the intrinsic characteristics of cryptomining code, and, thus, is resilient to obfuscation. Our approach could be integrated into browsers to warn users about silent cryptomining when visiting websites that do not ask for their consent.",
"title": ""
},
{
"docid": "60b21a7b9f0f52f48ae2830db600fa24",
"text": "The multi-armed bandit problem for a gambler is to decide which arm of a K-slot machine to pull to maximize his total reward in a series of trials. Many real-world learning and optimization problems can be modeled in this way. Several strategies or algorithms have been proposed as a solution to this problem in the last two decades, but, to our knowledge, there has been no common evaluation of these algorithms. This paper provides a preliminary empirical evaluation of several multiarmed bandit algorithms. It also describes and analyzes a new algorithm, Poker (Price Of Knowledge and Estimated Reward) whose performance compares favorably to that of other existing algorithms in several experiments. One remarkable outcome of our experiments is that the most naive approach, the -greedy strategy, proves to be often hard to beat.",
"title": ""
},
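To make the ε-greedy strategy mentioned in the abstract above concrete, here is a minimal sketch on simulated Bernoulli arms. The arm probabilities, ε value, and trial count are illustrative assumptions rather than figures from the paper, and the POKER algorithm itself is not implemented here.

```python
# A minimal sketch of the epsilon-greedy strategy, run on simulated Bernoulli
# arms; all parameter values below are illustrative assumptions.
import random

def epsilon_greedy(true_probs, epsilon=0.1, n_trials=10_000, seed=0):
    rng = random.Random(seed)
    k = len(true_probs)
    counts = [0] * k           # pulls per arm
    values = [0.0] * k         # running mean reward per arm
    total = 0.0
    for _ in range(n_trials):
        if rng.random() < epsilon:
            arm = rng.randrange(k)                       # explore
        else:
            arm = max(range(k), key=values.__getitem__)  # exploit
        reward = 1.0 if rng.random() < true_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total += reward
    return total / n_trials, counts

if __name__ == "__main__":
    avg_reward, pulls = epsilon_greedy([0.2, 0.5, 0.7])
    print(f"average reward: {avg_reward:.3f}, pulls per arm: {pulls}")
```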
{
"docid": "7ead5f6b374024f5153fe6f4db18a64d",
"text": "Smart mobile device usage has expanded at a very high rate all over the world. Since the mobile devices nowadays are used for a wide variety of application areas like personal communication, data storage and entertainment, security threats emerge, comparable to those which a conventional PC is exposed to. Mobile malware has been growing in scale and complexity as smartphone usage continues to rise. Android has surpassed other mobile platforms as the most popular whilst also witnessing a dramatic increase in malware targeting the platform. In this work, we have considered Android based malware for analysis and a scalable detection mechanism is designed using multifeature collaborative decision fusion (MCDF). The different features of a malicious file like the permission based features and the API call based features are considered in order to provide a better detection by training an ensemble of classifiers and combining their decisions using collaborative approach based on probability theory. The performance of the proposed model is evaluated on a collection of Android based malware comprising of different malware families and the results show that our approach give a better performance than state-of-the-art ensemble schemes available. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
8777b45cbab42de2f448b452efdaf6bc
|
End-to-End Optimization of Task-Oriented Dialogue Model with Deep Reinforcement Learning
|
[
{
"docid": "43e9fbaedf062a67be3c51b99889a6fb",
"text": "A partially observable Markov decision process has been proposed as a dialogue model that enables robustness to speech recognition errors and automatic policy optimisation using reinforcement learning (RL). However, conventional RL algorithms require a very large number of dialogues, necessitating a user simulator. Recently, Gaussian processes have been shown to substantially speed up the optimisation, making it possible to learn directly from interaction with human users. However, early studies have been limited to very low dimensional spaces and the learning has exhibited convergence problems. Here we investigate learning from human interaction using the Bayesian Update of Dialogue State system. This dynamic Bayesian network based system has an optimisation space covering more than one hundred features, allowing a wide range of behaviours to be learned. Using an improved policy model and a more robust reward function, we show that stable learning can be achieved that significantly outperforms a simulator trained policy.",
"title": ""
}
] |
[
{
"docid": "9fd2ec184fa051070466f61845e6df60",
"text": "Buildings across the world contribute significantly to the overall energy consumption and are thus stakeholders in grid operations. Towards the development of a smart grid, utilities and governments across the world are encouraging smart meter deployments. High resolution (often at every 15 minutes) data from these smart meters can be used to understand and optimize energy consumptions in buildings. In addition to smart meters, buildings are also increasingly managed with Building Management Systems (BMS) which control different sub-systems such as lighting and heating, ventilation, and air conditioning (HVAC). With the advent of these smart meters, increased usage of BMS and easy availability and widespread installation of ambient sensors, there is a deluge of building energy data. This data has been leveraged for a variety of applications such as demand response, appliance fault detection and optimizing HVAC schedules. Beyond the traditional use of such data sets, they can be put to effective use towards making buildings smarter and hence driving every possible bit of energy efficiency. Effective use of this data entails several critical areas from sensing to decision making and participatory involvement of occupants. Picking from wide literature in building energy efficiency, we identify five crust areas (also referred to as 5 Is) for realizing data driven energy efficiency in buildings : i) instrument optimally; ii) interconnect sub-systems; iii) inferred decision making; iv) involve occupants and v) intelligent operations. We classify prior work as per these 5 Is and discuss challenges, opportunities and applications across them. Building upon these 5 Is we discuss a well studied problem in building energy efficiency non-intrusive load monitoring (NILM) and how research in this area spans across the 5 Is.",
"title": ""
},
{
"docid": "2e0941f7874ce5372927544791a81a2e",
"text": "This project has as an objective of the extraction of humans in the foreground of image by creating a trimap which combines a depth map analysis and the Chromakey technique. The trimap is generated automatically, differing from the manual implementations which require user interaction. The extraction is based on extra information deriving from a structured lighting device (Kinect) integrated with a high resolution camera. With the junction of the monochromatic Kinect camera and the high definition camera, the results so far have been more expressive than only using the RGB and monochromatic cameras from the Kinect.",
"title": ""
},
{
"docid": "cf0973341eff0b944403c2e0d707ccf8",
"text": "Recommender systems provide suggestions for products, services, or information that match users’ interests and/or needs. However, not all recommendations persuade users to select or use the recommended item. The Elaboration Likelihood Model (ELM) suggests that individuals with low motivation or ability to process the information provided with a recommended item could eventually get persuaded to select/use the item if appropriate peripheral cues enrich the recommendation. The purpose of this research is to investigate the persuasive effect of certain influence strategies and the role of personality in the acceptance of recommendations. In the present study, a movie Recommender System was developed in order to empirically investigate the aforementioned questions applying certain persuasive strategies in the form of textual messages alongside the recommended item. The statistical method of Fuzzy-Set Qualitative Comparative Analysis (fsQCA) was used for data analysis and the results revealed that motivating messages do change users’ acceptance of the recommender item but not unconditionally since user’s personality differentiates the effect of the persuasive strategies.",
"title": ""
},
{
"docid": "eef1e51e4127ed481254f97963496f48",
"text": "-Vehicular ad hoc networks (VANETs) are wireless networks that do not require any fixed infrastructure. Regarding traffic safety applications for VANETs, warning messages have to be quickly and smartly disseminated in order to reduce the required dissemination time and to increase the number of vehicles receiving the traffic warning information. Adaptive techniques for VANETs usually consider features related to the vehicles in the scenario, such as their density, speed, and position, to adapt the performance of the dissemination process. These approaches are not useful when trying to warn the highest number of vehicles about dangerous situations in realistic vehicular environments. The Profile-driven Adaptive Warning Dissemination Scheme (PAWDS) designed to improve the warning message dissemination process. PAWDS system that dynamically modifies some of the key parameters of the propagation process and it cannot detect the vehicles which are in the dangerous position. Proposed system identifies the vehicles which are in the dangerous position and to send warning messages immediately. The vehicles must make use of all the available information efficiently to predict the position of nearby vehicles. Keywords— PAWDS, VANET, Ad hoc network , OBU , RSU, GPS.",
"title": ""
},
{
"docid": "45fa14d25180fc08ee162efdb3478188",
"text": "Acupuncture has a reputation among the public of being safe. Although recently performed prospective studies on the frequency of adverse effects of acupuncture found no severe complication, since 1965 many case reports of serious or even life-threatening incidents caused by acupuncture have appeared in the scientific literature. The most frequently reported complications are pneumothorax and lesions of the spinal cord. Severe injuries of peripheral nerves and blood vessels due to acupuncture seem to be very rare. Although case reports do not produce reliable data on the frequency of adverse events. information on sources of application errors can be extracted to increase the quality of acupuncture in education and therapy. All traumatic injuries described in this article could be avoided if practitioners had better anatomical knowledge, applied existing anatomical knowledge better, or both.",
"title": ""
},
{
"docid": "af8ddd6792a98ea3b59bdaab7c7fa045",
"text": "This research explores the alternative media ecosystem through a Twitter lens. Over a ten-month period, we collected tweets related to alternative narratives—e.g. conspiracy theories—of mass shooting events. We utilized tweeted URLs to generate a domain network, connecting domains shared by the same user, then conducted qualitative analysis to understand the nature of different domains and how they connect to each other. Our findings demonstrate how alternative news sites propagate and shape alternative narratives, while mainstream media deny them. We explain how political leanings of alternative news sites do not align well with a U.S. left-right spectrum, but instead feature an antiglobalist (vs. globalist) orientation where U.S. Alt-Right sites look similar to U.S. Alt-Left sites. Our findings describe a subsection of the emerging alternative media ecosystem and provide insight in how websites that promote conspiracy theories and pseudo-science may function to conduct underlying political agendas.",
"title": ""
},
{
"docid": "64ec8a9073308280740c96fb0c8b4617",
"text": "Lifting is a common manual material handling task performed in the workplaces. It is considered as one of the main risk factors for Work-related Musculoskeletal Disorders. To improve work place safety, it is necessary to assess musculoskeletal and biomechanical risk exposures associated with these tasks, which requires very accurate 3D pose. Existing approaches mainly utilize marker-based sensors to collect 3D information. However, these methods are usually expensive to setup, timeconsuming in process, and sensitive to the surrounding environment. In this study, we propose a multi-view based deep perceptron approach to address aforementioned limitations. Our approach consists of two modules: a \"view-specific perceptron\" network extracts rich information independently from the image of view, which includes both 2D shape and hierarchical texture information; while a \"multi-view integration\" network synthesizes information from all available views to predict accurate 3D pose. To fully evaluate our approach, we carried out comprehensive experiments to compare different variants of our design. The results prove that our approach achieves comparable performance with former marker-based methods, i.e. an average error of 14:72 ± 2:96 mm on the lifting dataset. The results are also compared with state-of-the-art methods on HumanEva- I dataset [1], which demonstrates the superior performance of our approach.",
"title": ""
},
{
"docid": "071d2d56b4516dc77fb70fcefb999fa0",
"text": "Boiling heat transfer occurs in many situations and can be used for thermal management in various engineered systems with high energy density, from power electronics to heat exchangers in power plants and nuclear reactors. Essentially, boiling is a complex physical process that involves interactions between heating surface, liquid, and vapor. For engineering applications, the boiling heat transfer is usually predicted by empirical correlations or semi-empirical models, which has relatively large uncertainty. In this paper, a data-driven approach based on deep feedforward neural networks is studied. The proposed networks use near wall local features to predict the boiling heat transfer. The inputs of networks include the local momentum and energy convective transport, pressure gradients, turbulent viscosity, and surface information. The outputs of the networks are the quantities of interest of a typical boiling system, including heat transfer components, wall superheat, and near wall void fraction. The networks are trained by the high-fidelity data processed from first principle simulation of pool boiling under varying input heat fluxes. State-of-the-art algorithms are applied to prevent the overfitting issue when training the deep networks. The trained networks are tested in interpolation cases and extrapolation cases which both demonstrate good agreement with the original high-fidelity simulation results.",
"title": ""
},
{
"docid": "947e9df94968783869711b76ce64359a",
"text": "Terrain surveying using unmanned aerial vehicles (UAV) is being applied to many areas such as precision agriculture or battlefield surveillance. A core problem of the surveying task is the coverage path planning problem (CPP); it is defined as the task of determining a path for an unmanned aerial vehicle so that it observes all points of a target area. State-of-the-art planners solve the problem for a unique region. However, in some operations, for example search and rescue, it is important to plan a path that covers more than a single area. We propose a method for solving the CPP for disjoint areas. The general problem is modeled as a rural postman problem which has been demonstrated to be NP-Hard. Our solution relies on dividing the problem into two steps: optimization of the visiting order and optimization of the flight lines orientation. The method is validated through several simulations using real parameters. In addition, it is fast enough for being implemented onboard.",
"title": ""
},
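The coverage path planning abstract above splits the problem into optimizing the visiting order of disjoint areas and optimizing the flight-line orientation. The sketch below illustrates only the first step with a greedy nearest-neighbour ordering of area centroids; the paper does not state that it uses this particular heuristic, so treat it as an illustrative stand-in under that assumption.

```python
# Hedged sketch of a visiting-order heuristic for disjoint coverage areas;
# the nearest-neighbour rule here is an assumption, not the paper's method.
import math

def visiting_order(centroids, start=(0.0, 0.0)):
    """Greedy nearest-neighbour ordering of area centroids from a start point."""
    remaining = list(range(len(centroids)))
    order, current = [], start
    while remaining:
        nxt = min(remaining,
                  key=lambda i: math.dist(current, centroids[i]))
        order.append(nxt)
        current = centroids[nxt]
        remaining.remove(nxt)
    return order

if __name__ == "__main__":
    areas = [(10.0, 2.0), (1.0, 1.0), (4.0, 8.0)]
    print(visiting_order(areas))   # -> [1, 2, 0]
```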
{
"docid": "7755e8c9234f950d0d5449602269e34b",
"text": "In this paper we describe a privacy-preserving method for commissioning an IoT device into a cloud ecosystem. The commissioning consists of the device proving its manufacturing provenance in an anonymous fashion without reliance on a trusted third party, and for the device to be anonymously registered through the use of a blockchain system. We introduce the ChainAnchor architecture that provides device commissioning in a privacy-preserving fashion. The goal of ChainAnchor is (i) to support anonymous device commissioning, (ii) to support device-owners being remunerated for selling their device sensor-data to service providers, and (iii) to incentivize device-owners and service providers to share sensor-data in a privacy-preserving manner.",
"title": ""
},
{
"docid": "ce3d81c74ef3918222ad7d2e2408bdb0",
"text": "This survey characterizes an emerging research area, sometimes called coordination theory, that focuses on the interdisciplinary study of coordination. Research in this area uses and extends ideas about coordination from disciplines such as computer science, organization theory, operations research, economics, linguistics, and psychology.\nA key insight of the framework presented here is that coordination can be seen as the process of managing dependencies among activities. Further progress, therefore, should be possible by characterizing different kinds of dependencies and identifying the coordination processes that can be used to manage them. A variety of processes are analyzed from this perspective, and commonalities across disciplines are identified. Processes analyzed include those for managing shared resources, producer/consumer relationships, simultaneity constraints, and task/subtask dependencies.\nSection 3 summarizes ways of applying a coordination perspective in three different domains:(1) understanding the effects of information technology on human organizations and markets, (2) designing cooperative work tools, and (3) designing distributed and parallel computer systems. In the final section, elements of a research agenda in this new area are briefly outlined.",
"title": ""
},
{
"docid": "c89be5ba71893ce4bca2b941c1f4a21a",
"text": "Accurate indoor human localization without requiring any pre-installed infrastructure is essential for many applications, such as search and rescue in fire disaster areas or human social interaction. Ultra-wideband (UWB) is a very promising technology for accurate indoor positioning with pre-installed receivers. An infrastructure-free methodology, called Pedestrian Dead-Reckoning (PDR), which uses an inertial measurement unit (IMU), can also be used for position estimation. In this approach, the drift errors of IMU in each step length estimation are compensated based on zero-velocity update (ZUPT), zero angular rate update (ZARU) and heuristic heading drift reduction (HDR) algorithms. An accurate step detection can be achieved by relying on the data provided by accelerometers and gyroscopes. In order to further improve the accuracy, a novel approach, which combines IMU PDR and UWB ranging measurements by Extended Kalman filter (EKF) without any pre-installed infrastructure, is proposed. All the components in this approach, the IMU, the mobile station (MS) and the receiver of the UWB are mounted on the feet. The biases in the IMU measurements, which cause inaccurate step length estimation, can be compensated by range measurements provided by UWB. The performance of the normal PDR with EKF is evaluated as comparison to the proposed approach. The real test results show that the proposed approach with EKF is the most effective way to reduce the error.",
"title": ""
},
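As a rough illustration of the zero-velocity (stance-phase) detection that underlies the ZUPT correction mentioned in the preceding abstract, the sketch below thresholds the local variability of the accelerometer magnitude from a foot-mounted IMU. The windowed standard-deviation test, its threshold, and the synthetic data are assumptions of this sketch, not the paper's exact detector or parameters.

```python
# Simplified zero-velocity (stance) detection sketch; threshold, window length
# and the synthetic demo data are illustrative assumptions.
import numpy as np

def detect_stance(acc, gravity=9.81, window=15, threshold=0.4):
    """Return a boolean mask of samples judged to be stationary.

    acc: (N, 3) accelerometer readings in m/s^2 from a foot-mounted IMU.
    A sample is 'stance' when the local standard deviation of the acceleration
    magnitude (gravity removed) stays below the threshold over the window.
    """
    mag = np.linalg.norm(acc, axis=1) - gravity
    stance = np.zeros(len(mag), dtype=bool)
    half = window // 2
    for i in range(len(mag)):
        lo, hi = max(0, i - half), min(len(mag), i + half + 1)
        stance[i] = np.std(mag[lo:hi]) < threshold
    return stance

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    still = np.tile([0.0, 0.0, 9.81], (100, 1)) + 0.02 * rng.standard_normal((100, 3))
    moving = np.tile([0.0, 0.0, 9.81], (60, 1)) + 3.0 * rng.standard_normal((60, 3))
    acc = np.vstack([still, moving, still])
    mask = detect_stance(acc)
    print(mask.sum(), "of", len(acc), "samples classified as stance")
```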
{
"docid": "597bfef473a39b5bf2890a2a697e5c26",
"text": "Ripple is a payment system and a digital currency which evolved completely independently of Bitcoin. Although Ripple holds the second highest market cap after Bitcoin, there are surprisingly no studies which analyze the provisions of Ripple. In this paper, we study the current deployment of the Ripple payment system. For that purpose, we overview the Ripple protocol and outline its security and privacy provisions in relation to the Bitcoin system. We also discuss the consensus protocol of Ripple. Contrary to the statement of the Ripple designers, we show that the current choice of parameters does not prevent the occurrence of forks in the system. To remedy this problem, we give a necessary and sufficient condition to prevent any fork in the system. Finally, we analyze the current usage patterns and trade dynamics in Ripple by extracting information from the Ripple global ledger. As far as we are aware, this is the first contribution which sheds light on the current deployment of the Ripple system.",
"title": ""
},
{
"docid": "7bd421d61df521c300740f4ed6789fa5",
"text": "Breast cancer has become a common disease around the world. Expert systems are valuable tools that have been successful for the disease diagnosis. In this research, we accordingly develop a new knowledge-based system for classification of breast cancer disease using clustering, noise removal, and classification techniques. Expectation Maximization (EM) is used as a clustering method to cluster the data in similar groups. We then use Classification and Regression Trees (CART) to generate the fuzzy rules to be used for the classification of breast cancer disease in the knowledge-based system of fuzzy rule-based reasoning method. To overcome the multi-collinearity issue, we incorporate Principal Component Analysis (PCA) in the proposed knowledge-based system. Experimental results on Wisconsin Diagnostic Breast Cancer and Mammographic mass datasets show that proposed methods remarkably improves the prediction accuracy of breast cancer. The proposed knowledge-based system can be used as a clinical decision support system to assist medical practitioners in the healthcare practice.",
"title": ""
},
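A rough scikit-learn analogue of the pipeline described in the abstract above (PCA to handle multicollinearity, EM clustering, then a CART tree), run on scikit-learn's copy of the Wisconsin Diagnostic Breast Cancer data. The fuzzy rule extraction of the original system is omitted, and the number of components, tree depth, and train/test split are assumptions of this sketch rather than the paper's settings.

```python
# Hedged sketch: PCA (decorrelation) -> Gaussian mixture fitted by EM
# (clustering) -> CART tree; hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_tr)
pca = PCA(n_components=10).fit(scaler.transform(X_tr))           # decorrelate features
Z_tr = pca.transform(scaler.transform(X_tr))
Z_te = pca.transform(scaler.transform(X_te))

gmm = GaussianMixture(n_components=2, random_state=0).fit(Z_tr)  # EM clustering
F_tr = np.c_[Z_tr, gmm.predict(Z_tr)]                            # cluster id as extra feature
F_te = np.c_[Z_te, gmm.predict(Z_te)]

cart = DecisionTreeClassifier(max_depth=4, random_state=0).fit(F_tr, y_tr)
print("test accuracy: %.3f" % cart.score(F_te, y_te))
```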
{
"docid": "89f4d45214a31a298d6db8e2c2b3cd12",
"text": "This paper discusses the semantics of weighted argumentation graphs that are bipolar, i.e. contain both attacks and support graphs. The work builds on previous work by Amgoud, Ben-Naim et. al. [1, 2], which presents and compares several semantics for argumentation graphs that contain only supports or only attacks relationships, respectively.",
"title": ""
},
{
"docid": "b3947afb7856b0ffd5983f293ca508b9",
"text": "High gain low profile slotted cavity with substrate integrated waveguide (SIW) is presented using TE440 high order mode. The proposed antenna is implemented to achieve 16.4 dBi high gain at 28 GHz with high radiation efficiency of 98%. Furthermore, the proposed antenna has a good radiation pattern. Simulated results using CST and HFSS software are presented and discussed. Several advantages such as low profile, low cost, light weight, small size, and easy implementation make the proposed antenna suitable for millimeter-wave wireless communications.",
"title": ""
},
{
"docid": "e4347c1b3df0bf821f552ef86a17a8c8",
"text": "Volumetric lesion segmentation via medical imaging is a powerful means to precisely assess multiple time-point lesion/tumor changes. Because manual 3D segmentation is prohibitively time consuming and requires radiological experience, current practices rely on an imprecise surrogate called response evaluation criteria in solid tumors (RECIST). Despite their coarseness, RECIST marks are commonly found in current hospital picture and archiving systems (PACS), meaning they can provide a potentially powerful, yet extraordinarily challenging, source of weak supervision for full 3D segmentation. Toward this end, we introduce a convolutional neural network based weakly supervised self-paced segmentation (WSSS) method to 1) generate the initial lesion segmentation on the axial RECISTslice; 2) learn the data distribution on RECIST-slices; 3) adapt to segment the whole volume slice by slice to finally obtain a volumetric segmentation. In addition, we explore how super-resolution images (2 ∼ 5 times beyond the physical CT imaging), generated from a proposed stacked generative adversarial network, can aid the WSSS performance. We employ the DeepLesion dataset, a comprehensive CTimage lesion dataset of 32, 735 PACS-bookmarked findings, which include lesions, tumors, and lymph nodes of varying sizes, categories, body regions and surrounding contexts. These are drawn from 10, 594 studies of 4, 459 patients. We also validate on a lymph-node dataset, where 3D ground truth masks are available for all images. For the DeepLesion dataset, we report mean Dice coefficients of 93% on RECIST-slices and 76% in 3D lesion volumes. We further validate using a subjective user study, where an experienced ∗Indicates equal contribution. †This work is done during Jinzheng Cai’s internship at National Institutes of Health. Le Lu is now with Nvidia Corp ([email protected]). CN N Initial 2D Segmentation Self-Paced 3D Segmentation CN N CN N CN N Image Image",
"title": ""
},
{
"docid": "1a732de3138d5771bea1590bb36f4db6",
"text": "Implanted sensors and actuators in the human body promise in-situ health monitoring and rapid advancements in personalized medicine. We propose a new paradigm where such implants may communicate wirelessly through a technique called as galvanic coupling, which uses weak electrical signals and the conduction properties of body tissues. While galvanic coupling overcomes the problem of massive absorption of RF waves in the body, the unique intra-body channel raises several questions on the topology of the implants and the external (i.e., on skin) data collection nodes. This paper makes the first contributions towards (i) building an energy-efficient topology through optimal placement of data collection points/relays using measurement-driven tissue channel models, and (ii) balancing the energy consumption over the entire implant network so that the application needs are met. We achieve this via a two-phase iterative clustering algorithm for the implants and formulate an optimization problem that decides the position of external data-gathering points. Our theoretical results are validated via simulations and experimental studies on real tissues, with demonstrated increase in the network lifetime.",
"title": ""
},
{
"docid": "a433ebaeeb5dc5b68976b3ecb770c0cd",
"text": "1 abstract The importance of the inspection process has been magniied by the requirements of the modern manufacturing environment. In electronics mass-production manufacturing facilities, an attempt is often made to achieve 100 % quality assurance of all parts, subassemblies, and nished goods. A variety of approaches for automated visual inspection of printed circuits have been reported over the last two decades. In this survey, algorithms and techniques for the automated inspection of printed circuit boards are examined. A classiication tree for these algorithms is presented and the algorithms are grouped according to this classiication. This survey concentrates mainly on image analysis and fault detection strategies, these also include the state-of-the-art techniques. A summary of the commercial PCB inspection systems is also presented. 2 Introduction Many important applications of vision are found in the manufacturing and defense industries. In particular, the areas in manufacturing where vision plays a major role are inspection, measurements , and some assembly tasks. The order among these topics closely reeects the manufacturing needs. In most mass-production manufacturing facilities, an attempt is made to achieve 100% quality assurance of all parts, subassemblies, and nished products. One of the most diicult tasks in this process is that of inspecting for visual appearance-an inspection that seeks to identify both functional and cosmetic defects. With the advances in computers (including high speed, large memory and low cost) image processing, pattern recognition, and artiicial intelligence have resulted in better and cheaper equipment for industrial image analysis. This development has made the electronics industry active in applying automated visual inspection to manufacturing/fabricating processes that include printed circuit boards, IC chips, photomasks, etc. Nello 1] gives a summary of the machine vision inspection applications in electronics industry. 01",
"title": ""
},
{
"docid": "f2cc1c45ecf32015eb6f0842badafd7c",
"text": "Firms are facing more difficulties with the implementation of strategies than with its formulation. Therefore, this paper examines the linkage between business strategy, project portfolio management, and business success to close the gap between strategy formulation and implementation. Earlier research has found some supporting evidence of a positive relationship between isolated concepts, but so far there is no coherent and integral framework covering the whole cycle from strategy to success. Therefore, the existing research on project portfolio management is extended by the concept of strategic orientation. Based on a literature review, a comprehensive conceptual model considering strategic orientation, project portfolio structuring, project portfolio success, and business success is developed. This model can be used for future empirical research on the influence of strategy on project portfolio management and its success. Furthermore, it can easily be extended e.g. by contextual factors. © 2010 Elsevier Ltd. and IPMA. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
452bfe889d01dccd523ba2c49603cab6
|
Modeling and Control of Three-Port DC/DC Converter Interface for Satellite Applications
|
[
{
"docid": "8b70670fa152dbd5185e80136983ff12",
"text": "This letter proposes a novel converter topology that interfaces three power ports: a source, a bidirectional storage port, and an isolated load port. The proposed converter is based on a modified version of the isolated half-bridge converter topology that utilizes three basic modes of operation within a constant-frequency switching cycle to provide two independent control variables. This allows tight control over two of the converter ports, while the third port provides the power balance in the system. The switching sequence ensures a clamping path for the energy of the leakage inductance of the transformer at all times. This energy is further utilized to achieve zero-voltage switching for all primary switches for a wide range of source and load conditions. Basic steady-state analysis of the proposed converter is included, together with a suggested structure for feedback control. Key experimental results are presented that validate the converter operation and confirm its ability to achieve tight independent control over two power processing paths. This topology promises significant savings in component count and losses for power-harvesting systems. The proposed topology and control is particularly relevant to battery-backed power systems sourced by solar or fuel cells",
"title": ""
}
] |
[
{
"docid": "6718aa3480c590af254a120376822d07",
"text": "This paper proposes a novel method for content-based watermarking based on feature points of an image. At each feature point, the watermark is embedded after scale normalization according to the local characteristic scale. Characteristic scale is the maximum scale of the scale-space representation of an image at the feature point. By binding watermarking with the local characteristics of an image, resilience against a5ne transformations can be obtained easily. Experimental results show that the proposed method is robust against various image processing steps including a5ne transformations, cropping, 7ltering and JPEG compression. ? 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ea048488791219be809072862a061444",
"text": "Our object oriented programming approach have great ability to improve the programming behavior for modern system and software engineering but it does not give the proper interaction of real world .In real world , programming required powerful interlinking among properties and characteristics towards the various objects. Basically this approach of programming gives the better presentation of object with real world and provide the better relationship among the objects. I have explained the new concept of my neuro object oriented approach .This approach contains many new features like originty , new concept of inheritance , new concept of encapsulation , object relation with dimensions , originty relation with dimensions and time , category of NOOPA like high order thinking object and low order thinking object , differentiation model for achieving the various requirements from the user and a rotational model .",
"title": ""
},
{
"docid": "060c1f1e08624c3b59610f150d6f27f8",
"text": "As graph models are applied to more widely varying fields, researchers struggle with tools for exploring and analyzing these structures. We describe GUESS, a novel system for graph exploration that combines an interpreted language with a graphical front end that allows researchers to rapidly prototype and deploy new visualizations. GUESS also contains a novel, interactive interpreter that connects the language and interface in a way that facilities exploratory visualization tasks. Our language, Gython, is a domain-specific embedded language which provides all the advantages of Python with new, graph specific operators, primitives, and shortcuts. We highlight key aspects of the system in the context of a large user survey and specific, real-world, case studies ranging from social and knowledge networks to distributed computer network analysis.",
"title": ""
},
{
"docid": "211484ec722f4df6220a86580d7ecba8",
"text": "The widespread use of vision-based surveillance systems has inspired many research efforts on people localization. In this paper, a series of novel image transforms based on the vanishing point of vertical lines is proposed for enhancement of the probabilistic occupancy map (POM)-based people localization scheme. Utilizing the characteristic that the extensions of vertical lines intersect at a vanishing point, the proposed transforms, based on image or ground plane coordinate system, aims at producing transformed images wherein each standing/walking person will have an upright appearance. Thus, the degradation in localization accuracy due to the deviation of camera configuration constraint specified can be alleviated, while the computation efficiency resulted from the applicability of integral image can be retained. Experimental results show that significant improvement in POM-based people localization for more general camera configurations can indeed be achieved with the proposed image transforms.",
"title": ""
},
{
"docid": "41b6bff4b6f3be41903725e39f630722",
"text": "Despite the huge research on crowd on behavior understanding in visual surveillance community, lack of publicly available realistic datasets for evaluating crowd behavioral interaction led not to have a fair common test bed for researchers to compare the strength of their methods in the real scenarios. This work presents a novel crowd dataset contains around 45,000 video clips which annotated by one of the five different fine-grained abnormal behavior categories. We also evaluated two state-of-the-art methods on our dataset, showing that our dataset can be effectively used as a benchmark for fine-grained abnormality detection. The details of the dataset and the results of the baseline methods are presented in the paper.",
"title": ""
},
{
"docid": "58b5be2fadbaacfb658f7d18cec807d3",
"text": "As the growth of rapid prototyping techniques shortens the development life cycle of software and electronic products, usability inquiry methods can play a more significant role during the development life cycle, diagnosing usability problems and providing metrics for making comparative decisions. A need has been realized for questionnaires tailored to the evaluation of electronic mobile products, wherein usability is dependent on both hardware and software as well as the emotional appeal and aesthetic integrity of the design. This research followed a systematic approach to develop a new questionnaire tailored to measure the usability of electronic mobile products. The Mobile Phone Usability Questionnaire (MPUQ) developed throughout this series of studies evaluates the usability of mobile phones for the purpose of making decisions among competing variations in the end-user market, alternatives of prototypes during the development process, and evolving versions during an iterative design process. In addition, the questionnaire can serve as a tool for identifying diagnostic information to improve specific usability dimensions and related interface elements. Employing the refined MPUQ, decision making models were developed using Analytic Hierarchy Process (AHP) and linear regression analysis. Next, a new group of representative mobile users was employed to develop a hierarchical model representing the usability dimensions incorporated in the questionnaire and to assign priorities to each node in the hierarchy. Employing the AHP and regression models, important usability dimensions and questionnaire items for mobile products were identified. Finally, a case study of comparative usability evaluations was performed to validate the MPUQ and models. A computerized support tool was developed to perform redundancy and relevancy analyses for the selection of appropriate questionnaire items. The weighted geometric mean was used to combine multiple numbers of matrices from pairwise comparison based on decision makers’ consistency ratio values for AHP. The AHP and regression models provided important usability dimensions so that mobile device usability practitioners can simply focus on the interface elements related to the decisive usability dimensions in order to improve the usability",
"title": ""
},
{
"docid": "2e29301adf162bb5e9fecea50a25a85a",
"text": "The collection and combination of assessment data in trustworthiness evaluation of cloud service is challenging, notably because QoS value may be missing in offline evaluation situation due to the time-consuming and costly cloud service invocation. Considering the fact that many trustworthiness evaluation problems require not only objective measurement but also subjective perception, this paper designs a novel framework named CSTrust for conducting cloud service trustworthiness evaluation by combining QoS prediction and customer satisfaction estimation. The proposed framework considers how to improve the accuracy of QoS value prediction on quantitative trustworthy attributes, as well as how to estimate the customer satisfaction of target cloud service by taking advantages of the perception ratings on qualitative attributes. The proposed methods are validated through simulations, demonstrating that CSTrust can effectively predict assessment data and release evaluation results of trustworthiness. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7165568feac9cc0bc0c1056b930958b8",
"text": "We describe a 63-year-old woman with an asymptomatic papular eruption on the vulva. Clinically, the lesions showed multiple pin-head-sized whitish papules on the labia major. Histologically, the biopsy specimen showed acantholysis throughout the epidermis with the presence of dyskeratotic cells resembling corps ronds and grains, hyperkeratosis and parakeratosis. These clinical and histological findings were consistent with the diagnosis of papular acantholytic dyskeratosis of the vulva which is a rare disorder, first described in 1984.",
"title": ""
},
{
"docid": "3e1690ae4d61d87edb0e4c3ce40f6a88",
"text": "Despite previous efforts in auditing software manually and automatically, buffer overruns are still being discovered in programs in use. A dynamic bounds checker detects buffer overruns in erroneous software before it occurs and thereby prevents attacks from corrupting the integrity of the system. Dynamic buffer overrun detectors have not been adopted widely because they either (1) cannot guard against all buffer overrun attacks, (2) break existing code, or (3) incur too high an overhead. This paper presents a practical detector called CRED (C Range Error Detector) that avoids each of these deficiencies. CRED finds all buffer overrun attacks as it directly checks for the bounds of memory accesses. Unlike the original referent-object based bounds-checking technique, CRED does not break existing code because it uses a novel solution to support program manipulation of out-of-bounds addresses. Finally, by restricting the bounds checks to strings in a program, CRED’s overhead is greatly reduced without sacrificing protection in the experiments we performed. CRED is implemented as an extension of the GNU C compiler version 3.3.1. The simplicity of our design makes possible a robust implementation that has been tested on over 20 open-source programs, comprising over 1.2 million lines of C code. CRED proved effective in detecting buffer overrun attacks on programs with known vulnerabilities, and is the only tool found to guard against a testbed of 20 different buffer overflow attacks[34]. Finding overruns only on strings impose an overhead of less This research was performed while the first author was at Stanford University, and this material is based upon work supported in part by the National Science Foundation under Grant No. 0086160. than 26% for 14 of the programs, and an overhead of up to 130% for the remaining six, while the previous state-ofthe-art bounds checker by Jones and Kelly breaks 60% of the programs and is 12 times slower. Incorporating wellknown techniques for optimizing bounds checking into CRED could lead to further performance improvements.",
"title": ""
},
{
"docid": "59a49feef4e3a79c5899fede208a183c",
"text": "This study proposed and tested a model of consumer online buying behavior. The model posits that consumer online buying behavior is affected by demographics, channel knowledge, perceived channel utilities, and shopping orientations. Data were collected by a research company using an online survey of 999 U.S. Internet users, and were cross-validated with other similar national surveys before being used to test the model. Findings of the study indicated that education, convenience orientation, Página 1 de 20 Psychographics of the Consumers in Electronic Commerce 11/10/01 http://www.ascusc.org/jcmc/vol5/issue2/hairong.html experience orientation, channel knowledge, perceived distribution utility, and perceived accessibility are robust predictors of online buying status (frequent online buyer, occasional online buyer, or non-online buyer) of Internet users. Implications of the findings and directions for future research were discussed.",
"title": ""
},
{
"docid": "21943e640ce9b56414994b5df504b1a6",
"text": "It is a preferable method to transfer power wirelessly using contactless slipring systems for rotary applications. The current single or multiple-unit single-phase systems often have limited power transfer capability, so they may not be able to meet the load requirements. This paper presents a contactless slipring system based on axially traveling magnetic field that can achieve a high output power level. A new index termed mutual inductance per pole is introduced to simplify the analysis of the mutually coupled poly-phase system to a single-phase basis. Both simulation and practical results have shown that the proposed system can transfer 2.7 times more power than a multiple-unit (six individual units) single-phase system with the same amount of ferrite and copper materials at higher power transfer efficiency. It has been found that the new system can achieve about 255.6 W of maximum power at 97% efficiency, compared to 68.4 W at 90% of a multiple-unit (six individual units) single-phase system.",
"title": ""
},
{
"docid": "caa7ecc11fc36950d3e17be440d04010",
"text": "In this paper, a comparative study of routing protocols is performed in a hybrid network to recommend the best routing protocol to perform load balancing for Internet traffic. Open Shortest Path First (OSPF), Interior Gateway Routing Protocol (IGRP) and Intermediate System to Intermediate System (IS-IS) routing protocols are compared in OPNET modeller 14 to investigate their capability of ensuring fair distribution of traffic in a hybrid network. The network simulated is scaled to a campus. The network loads are varied in size and performance study is made by running simulations with all the protocols. The only considered performance factors for observation are packet drop, network delay, throughput and network load. IGRP presented better performance as compared to other protocols. The benefit of using IGRP is reduced packet drop, reduced network delay, increased throughput while offering relative better distribution of traffic in a hybrid network.",
"title": ""
},
{
"docid": "a74081f7108e62fadb48446255dd246b",
"text": "Existing fuzzy neural networks (FNNs) are mostly developed under a shallow network configuration having lower generalization power than those of deep structures. This paper proposes a novel self-organizing deep fuzzy neural network, namely deep evolving fuzzy neural networks (DEVFNN). Fuzzy rules can be automatically extracted from data streams or removed if they play little role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers using a drift detection method which not only detects the covariate drift, variations of input space, but also accurately identifies the real drift, dynamic changes of both feature space and target space. DEVFNN is developed under the stacked generalization principle via the feature augmentation concept where a recently developed algorithm, namely Generic Classifier (gClass), drives the hidden layer. It is equipped by an automatic feature selection method which controls activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure is put forward using the concept of hidden layer merging to prevent uncontrollable growth of input space dimension due to the nature of feature augmentation approach in building a deep network structure. DEVFNN works in the sample-wise fashion and is compatible for data stream applications. The efficacy of DEVFNN has been thoroughly evaluated using six datasets with non-stationary properties under the prequential test-then-train protocol. It has been compared with four state-ofthe art data stream methods and its shallow counterpart where DEVFNN demonstrates improvement of classification accuracy. Moreover, it is also shown that the concept drift detection method is an effective tool to control the depth of network structure while the hidden layer merging scenario is capable of simplifying the network complexity of a deep network with negligible compromise of generalization performance.",
"title": ""
},
{
"docid": "2c442933c4729e56e5f4f46b5b8071d6",
"text": "Wireless body area networks consist of several devices placed on the human body, sensing vital signs and providing remote recognition of health disorders. Low power consumption is crucial in these networks. A new energy-efficient topology is provided in this paper, considering relay and sensor nodes' energy consumption and network maintenance costs. In this topology design, relay nodes, placed on the cloth, are used to help the sensor nodes forwarding data to the sink. Relay nodes' situation is determined such that the relay nodes' energy consumption merges the uniform distribution. Simulation results show that the proposed method increases the lifetime of the network with nearly uniform distribution of the relay nodes' energy consumption. Furthermore, this technique simultaneously reduces network maintenance costs and continuous replacements of the designer clothing. The proposed method also determines the way by which the network traffic is split and multipath routed to the sink.",
"title": ""
},
{
"docid": "48088cbe2f40cbbb32beb53efa224f3b",
"text": "Pain is a nonmotor symptom that substantially affects the quality of life of at least one-third of patients with Parkinson disease (PD). Interestingly, patients with PD frequently report different types of pain, and a successful approach to distinguish between these pains is required so that effective treatment strategies can be established. Differences between these pains are attributable to varying peripheral pain mechanisms, the role of motor symptoms in causing or amplifying pain, and the role of PD pathophysiology in pain processing. In this Review, we propose a four-tier taxonomy to improve classification of pain in PD. This taxonomy assigns nociceptive, neuropathic and miscellaneous pains to distinct categories, as well as further characterization into subcategories. Currently, treatment of pain in PD is based on empirical data only, owing to a lack of controlled studies. The facultative symptom of 'dopaminergically maintained pain' refers to pain that benefits from antiparkinson medication. Here, we also present additional pharmacological and nonpharmacological treatment approaches, which can be targeted to a specific pain following classification using our taxonomy.",
"title": ""
},
{
"docid": "936cdd4b58881275485739518ccb4f85",
"text": "Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems — BN’s error increases rapidly when the batch size becomes smaller, caused by inaccurate batch statistics estimation. This limits BN’s usage for training larger models and transferring features to computer vision tasks including detection, segmentation, and video, which require small batches constrained by memory consumption. In this paper, we present Group Normalization (GN) as a simple alternative to BN. GN divides the channels into groups and computes within each group the mean and variance for normalization. GN’s computation is independent of batch sizes, and its accuracy is stable in a wide range of batch sizes. On ResNet-50 trained in ImageNet, GN has 10.6% lower error than its BN counterpart when using a batch size of 2; when using typical batch sizes, GN is comparably good with BN and outperforms other normalization variants. Moreover, GN can be naturally transferred from pre-training to fine-tuning. GN can outperform its BN-based counterparts for object detection and segmentation in COCO, and for video classification in Kinetics, showing that GN can effectively replace the powerful BN in a variety of tasks. GN can be easily implemented by a few lines of code.",
"title": ""
},
{
"docid": "9fe93bda131467c7851d75644de83534",
"text": "The Banking industry has undergone a dramatic change since internet penetration and the concept of internet banking. Internet banking is defined as an internet portal, through which customers can use different kinds of banking services. Internet banking has major effects on banking relationships. The primary objective of this research is to identify the factors that influence internet banking adoption. Using PLS, a model is successfully proved and it is found that internet banking is influenced by its perceived reliability, Perceived ease of use and Perceived usefulness. In the marketing process of internet banking services marketing experts should emphasize these benefits its adoption provides and awareness can also be improved to attract consumers’ attention to internet banking services. Factors Influencing Consumer Adoption of Internet Banking in India 1 Assistant professor, Karunya School of Management, Karunya University, Coimbatore, India. Email: [email protected]",
"title": ""
},
{
"docid": "959c3d0aaa3c17ab43f0362fd03f7b98",
"text": "In this thesis, channel estimation techniques are studied and investigated for a novel multicarrier modulation scheme, Universal Filtered Multi-Carrier (UFMC). UFMC (a.k.a. UFOFDM) is considered as a candidate for the 5th Generation of wireless communication systems, which aims at replacing OFDM and enhances system robustness and performance in relaxed synchronization condition e.g. time-frequency misalignment. Thus, it may more efficiently support Machine Type Communication (MTC) and Internet of Things (IoT), which are considered as challenging applications for next generation of wireless communication systems. There exist many methods of channel estimation, time-frequency synchronization and equalization for classical CP-OFDM systems. Pilot-aided methods known from CP-OFDM are adopted and applied to UFMC systems. The performance of UFMC is then compared with CP-OFDM.",
"title": ""
},
{
"docid": "9b8e9b5fa9585cf545d6ab82483c9f38",
"text": "A survey of bacterial and archaeal genomes shows that many Tn7-like transposons contain minimal type I-F CRISPR-Cas systems that consist of fused cas8f and cas5f, cas7f, and cas6f genes and a short CRISPR array. Several small groups of Tn7-like transposons encompass similarly truncated type I-B CRISPR-Cas. This minimal gene complement of the transposon-associated CRISPR-Cas systems implies that they are competent for pre-CRISPR RNA (precrRNA) processing yielding mature crRNAs and target binding but not target cleavage that is required for interference. Phylogenetic analysis demonstrates that evolution of the CRISPR-Cas-containing transposons included a single, ancestral capture of a type I-F locus and two independent instances of type I-B loci capture. We show that the transposon-associated CRISPR arrays contain spacers homologous to plasmid and temperate phage sequences and, in some cases, chromosomal sequences adjacent to the transposon. We hypothesize that the transposon-encoded CRISPR-Cas systems generate displacement (R-loops) in the cognate DNA sites, targeting the transposon to these sites and thus facilitating their spread via plasmids and phages. These findings suggest the existence of RNA-guided transposition and fit the guns-for-hire concept whereby mobile genetic elements capture host defense systems and repurpose them for different stages in the life cycle of the element.",
"title": ""
}
] |
scidocsrr
|
09ac51c093547175df6b553cc17f7670
|
Drivable Road Detection with 3D Point Clouds Based on the MRF for Intelligent Vehicle
|
[
{
"docid": "3bc9e621a0cfa7b8791ae3fb94eff738",
"text": "This paper deals with environment perception for automobile applications. Environment perception comprises measuring the surrounding field with onboard sensors such as cameras, radar, lidars, etc., and signal processing to extract relevant information for the planned safety or assistance function. Relevant information is primarily supplied using two well-known methods, namely, object based and grid based. In the introduction, we discuss the advantages and disadvantages of the two methods and subsequently present an approach that combines the two methods to achieve better results. The first part outlines how measurements from stereo sensors can be mapped onto an occupancy grid using an appropriate inverse sensor model. We employ the Dempster-Shafer theory to describe the occupancy grid, which has certain advantages over Bayes' theorem. Furthermore, we generate clusters of grid cells that potentially belong to separate obstacles in the field. These clusters serve as input for an object-tracking framework implemented with an interacting multiple-model estimator. Thereby, moving objects in the field can be identified, and this, in turn, helps update the occupancy grid more effectively. The first experimental results are illustrated, and the next possible research intentions are also discussed.",
"title": ""
}
] |
[
{
"docid": "be43b90cce9638b0af1c3143b6d65221",
"text": "Reasoning on provenance information and property propagation is of significant importance in e-science since it helps scientists manage derived metadata in order to understand the source of an object, reproduce results of processes and facilitate quality control of results and processes. In this paper we introduce a simple, yet powerful reasoning mechanism based on property propagation along the transitive part-of and derivation chains, in order to trace the provenance of an object and to carry useful inferences. We apply our reasoning in semantic repositories using the CIDOC-CRM conceptual schema and its extension CRMdig, which has been develop for representing the digital and empirical provenance of digi-",
"title": ""
},
{
"docid": "e85b761664a01273a10819566699bf4f",
"text": "Julius Bernstein belonged to the Berlin school of “organic physicists” who played a prominent role in creating modern physiology and biophysics during the second half of the nineteenth century. He trained under du Bois-Reymond in Berlin, worked with von Helmholtz in Heidelberg, and finally became Professor of Physiology at the University of Halle. Nowadays his name is primarily associated with two discoveries: (1) The first accurate description of the action potential in 1868. He developed a new instrument, a differential rheotome (= current slicer) that allowed him to resolve the exact time course of electrical activity in nerve and muscle and to measure its conduction velocity. (2) His ‘Membrane Theory of Electrical Potentials’ in biological cells and tissues. This theory, published by Bernstein in 1902, provided the first plausible physico-chemical model of bioelectric events; its fundamental concepts remain valid to this day. Bernstein pursued an intense and long-range program of research in which he achieved a new level of precision and refinement by formulating quantitative theories supported by exact measurements. The innovative design and application of his electromechanical instruments were milestones in the development of biomedical engineering techniques. His seminal work prepared the ground for hypotheses and experiments on the conduction of the nervous impulse and ultimately the transmission of information in the nervous system. Shortly after his retirement, Bernstein (1912) summarized his electrophysiological work and extended his theoretical concepts in a book Elektrobiologie that became a classic in its field. The Bernstein Centers for Computational Neuroscience recently established at several universities in Germany were named to honor the person and his work.",
"title": ""
},
{
"docid": "82a3fe6dfa81e425eb3aa3404799e72d",
"text": "ABSTRACT: Nonlinear control problem for a missile autopilot is quick adaptation and minimizing the desired acceleration to missile nonlinear model. For this several missile controllers are provided which are on the basis of nonlinear control or design of linear control for the linear missile system. In this paper a linear control of dynamic matrix type is proposed for the linear model of missile. In the first section, an approximate two degrees of freedom missile model, known as Horton model, is introduced. Then, the nonlinear model is converted into observable and controllable model base on the feedback linear rule of input-state mode type. Finally for design of control model, the dynamic matrix flight control, which is one of the linear predictive control design methods on the basis of system step response information, is used. This controller is a recursive method which calculates the development of system input by definition and optimization of a cost function and using system dynamic matrix. So based on the applied inputs and previous output information, the missile acceleration would be calculated. Unlike other controllers, this controller doesn’t require an interaction effect and accurate model. Although, it has predicting and controlling horizon, there isn’t such horizons in non-predictive methods.",
"title": ""
},
{
"docid": "c966c67c098e8178e6c05b6d446f6dd3",
"text": "Data are today an asset more critical than ever for all organizations we may think of. Recent advances and trends, such as sensor systems, IoT, cloud computing, and data analytics, are making possible to pervasively, efficiently, and effectively collect data. However for data to be used to their full power, data security and privacy are critical. Even though data security and privacy have been widely investigated over the past thirty years, today we face new difficult data security and privacy challenges. Some of those challenges arise from increasing privacy concerns with respect to the use of data and from the need of reconciling privacy with the use of data for security in applications such as homeland protection, counterterrorism, and health, food and water security. Other challenges arise because the deployments of new data collection and processing devices, such as those used in IoT systems, increase the data attack surface. In this paper, we discuss relevant concepts and approaches for data security and privacy, and identify research challenges that must be addressed by comprehensive solutions to data security and privacy.",
"title": ""
},
{
"docid": "c1a76ba2114ec856320651489ee9b28b",
"text": "The boost of available digital media has led to a significant increase in derivative work. With tools for manipulating objects becoming more and more mature, it can be very difficult to determine whether one piece of media was derived from another one or tampered with. As derivations can be done with malicious intent, there is an urgent need for reliable and easily usable tampering detection methods. However, even media considered semantically untampered by humans might have already undergone compression steps or light post-processing, making automated detection of tampering susceptible to false positives. In this paper, we present the PSBattles dataset which is gathered from a large community of image manipulation enthusiasts and provides a basis for media derivation and manipulation detection in the visual domain. The dataset consists of 102’028 images grouped into 11’142 subsets, each containing the original image as well as a varying number of manipulated derivatives.",
"title": ""
},
{
"docid": "e54c308623cb2a2f97e3075e572fdadb",
"text": "Augmented Reality is becoming increasingly popular. The success of a platform is typically observed by measuring the health of the software ecosystem surrounding it. In this paper, we take a closer look at the Vuforia ecosystem’s health by mining the Vuforia platform application repository. It is observed that the developer ecosystem is the strength of the platform. We also determine that Vuforia could be the biggest player in the market if they lay its focus on specific types of app",
"title": ""
},
{
"docid": "049a7164a973fb515ed033ba216ec344",
"text": "Modern vehicle fleets, e.g., for ridesharing platforms and taxi companies, can reduce passengers' waiting times by proactively dispatching vehicles to locations where pickup requests are anticipated in the future. Yet it is unclear how to best do this: optimal dispatching requires optimizing over several sources of uncertainty, including vehicles' travel times to their dispatched locations, as well as coordinating between vehicles so that they do not attempt to pick up the same passenger. While prior works have developed models for this uncertainty and used them to optimize dispatch policies, in this work we introduce a model-free approach. Specifically, we propose MOVI, a Deep Q-network (DQN)-based framework that directly learns the optimal vehicle dispatch policy. Since DQNs scale poorly with a large number of possible dispatches, we streamline our DQN training and suppose that each individual vehicle independently learns its own optimal policy, ensuring scalability at the cost of less coordination between vehicles. We then formulate a centralized receding-horizon control (RHC) policy to compare with our DQN policies. To compare these policies, we design and build MOVI as a large-scale realistic simulator based on 15 million taxi trip records that simulates policy-agnostic responses to dispatch decisions. We show that the DQN dispatch policy reduces the number of unserviced requests by 76% compared to without dispatch and 20% compared to the RHC approach, emphasizing the benefits of a model-free approach and suggesting that there is limited value to coordinating vehicle actions. This finding may help to explain the success of ridesharing platforms, for which drivers make individual decisions.",
"title": ""
},
{
"docid": "18d28769691fb87a6ebad5aae3eae078",
"text": "The current head Injury Assessment Reference Values (IARVs) for the child dummies are based in part on scaling adult and animal data and on reconstructions of real world accident scenarios. Reconstruction of well-documented accident scenarios provides critical data in the evaluation of proposed IARV values, but relatively few accidents are sufficiently documented to allow for accurate reconstructions. This reconstruction of a well documented fatal-fall involving a 23-month old child supplies additional data for IARV assessment. The videotaped fatal-fall resulted in a frontal head impact onto a carpet-covered cement floor. The child suffered an acute right temporal parietal subdural hematoma without skull fracture. The fall dynamics were reconstructed in the laboratory and the head linear and angular accelerations were quantified using the CRABI-18 Anthropomorphic Test Device (ATD). Peak linear acceleration was 125 ± 7 g (range 114-139), HIC15 was 335 ± 115 (Range 257-616), peak angular velocity was 57± 16 (Range 26-74), and peak angular acceleration was 32 ± 12 krad/s 2 (Range 15-56). The results of the CRABI-18 fatal fall reconstruction were consistent with the linear and rotational tolerances reported in the literature. This study investigates the usefulness of the CRABI-18 anthropomorphic testing device in forensic investigations of child head injury and aids in the evaluation of proposed IARVs for head injury. INTRODUCTION Defining the mechanisms of injury and the associated tolerance of the pediatric head to trauma has been the focus of a great deal of research and effort. In contrast to the multiple cadaver experimental studies of adult head trauma published in the literature, there exist only a few experimental studies of infant head injury using human pediatric cadaveric tissue [1-6]. While these few studies have been very informative, due to limitations in sample size, experimental equipment, and study objectives, current estimates of the tolerance of the pediatric head are based on relatively few pediatric cadaver data points combined with the use of scaled adult and animal data. In effort to assess and refine these tolerance estimates, a number of researchers have performed detailed accident reconstructions of well-documented injury scenarios [7-11] . The reliability of the reconstruction data are predicated on the ability to accurately reconstruct the actual accident and quantify the result in a useful injury metric(s). These resulting injury metrics can then be related to the injuries of the child and this, when combined with other reliable reconstructions, can form an important component in evaluating pediatric injury mechanisms and tolerance. Due to limitations in case identification, data collection, and resources, relatively few reconstructions of pediatric accidents have been performed. In this study, we report the results of the reconstruction of an uncharacteristically well documented fall resulting in a fatal head injury of a 23 month old child. The case study was previously reported as case #5 by Plunkett [12]. BACKGROUND As reported by Plunkett (2001), A 23-month-old was playing on a plastic gym set in the garage at her home with her older brother. She had climbed the attached ladder to the top rail above the platform and was straddling the rail, with her feet 0.70 meters (28 inches) above the floor. She lost her balance and fell headfirst onto a 1-cm (3⁄8-inch) thick piece of plush carpet remnant covering the concrete floor. 
She struck the carpet first with her outstretched hands, then with the right front side of her forehead, followed by her right shoulder. Her grandmother had been watching the children play and videotaped the fall. She cried after the fall but was alert",
"title": ""
},
{
"docid": "8d4288ddbdee91e934e6a98734285d1a",
"text": "Find loads of the designing social interfaces principles patterns and practices for improving the user experience book catalogues in this site as the choice of you visiting this page. You can also join to the website book library that will show you numerous books from any types. Literature, science, politics, and many more catalogues are presented to offer you the best book to find. The book that really makes you feels satisfied. Or that's the book that will save you from your job deadline.",
"title": ""
},
{
"docid": "160e06b33d6db64f38480c62989908fb",
"text": "A theoretical and experimental study has been performed on a low-profile, 2.4-GHz dipole antenna that uses a frequency-selective surface (FSS) with varactor-tuned unit cells. The tunable unit cell is a square patch with a small aperture on either side to accommodate the varactor diodes. The varactors are placed only along one dimension to avoid the use of vias and simplify the dc bias network. An analytical circuit model for this type of electrically asymmetric unit cell is shown. The measured data demonstrate tunability from 2.15 to 2.63 GHz with peak gains at broadside that range from 3.7- to 5-dBi and instantaneous bandwidths of 50 to 280 MHz within the tuning range. It is shown that tuning for optimum performance in the presence of a human-core body phantom can be achieved. The total antenna thickness is approximately λ/45.",
"title": ""
},
{
"docid": "572867885a16afc0af6a8ed92632a2a7",
"text": "We present an Efficient Log-based Troubleshooting(ELT) system for cloud computing infrastructures. ELT adopts a novel hybrid log mining approach that combines coarse-grained and fine-grained log features to achieve both high accuracy and low overhead. Moreover, ELT can automatically extract key log messages and perform invariant checking to greatly simplify the troubleshooting task for the system administrator. We have implemented a prototype of the ELT system and conducted an extensive experimental study using real management console logs of a production cloud system and a Hadoop cluster. Our experimental results show that ELT can achieve more efficient and powerful troubleshooting support than existing schemes. More importantly, ELT can find software bugs that cannot be detected by current cloud system management practice.",
"title": ""
},
{
"docid": "0c43c0dbeaff9afa0e73bddb31c7dac0",
"text": "A compact dual-band dielectric resonator antenna (DRA) using a parasitic c-slot fed by a microstrip line is proposed. In this configuration, the DR performs the functions of an effective radiator and the feeding structure of the parasitic c-slot in the ground plane. By optimizing the proposed structure parameters, the structure resonates at two different frequencies. One is from the DRA with the broadside patterns and the other from the c-slot with the dipole-like patterns. In order to determine the performance of varying design parameters on bandwidth and resonance frequency, the parametric study is carried out using simulation software High-Frequency Structure Simulator and experimental results. The measured and simulated results show excellent agreement.",
"title": ""
},
{
"docid": "1465b6c38296dfc46f8725dca5179cf1",
"text": "A brief introduction is given to the actual mechanics of simulated annealing, and a simple example from an IC layout is used to illustrate how these ideas can be applied. The complexities and tradeoffs involved in attacking a realistically complex design problem are illustrated by dissecting two very different annealing algorithms for VLSI chip floorplanning. Several current research problems aimed at determining more precisely how and why annealing algorithms work are examined. Some philosophical issues raised by the introduction of annealing are discussed.<<ETX>>",
"title": ""
},
{
"docid": "e72c88990ad5778eea9ce6dabace4326",
"text": "Studies in humans and rodents have suggested that behavior can at times be \"goal-directed\"-that is, planned, and purposeful-and at times \"habitual\"-that is, inflexible and automatically evoked by stimuli. This distinction is central to conceptions of pathological compulsion, as in drug abuse and obsessive-compulsive disorder. Evidence for the distinction has primarily come from outcome devaluation studies, in which the sensitivity of a previously learned behavior to motivational change is used to assay the dominance of habits versus goal-directed actions. However, little is known about how habits and goal-directed control arise. Specifically, in the present study we sought to reveal the trial-by-trial dynamics of instrumental learning that would promote, and protect against, developing habits. In two complementary experiments with independent samples, participants completed a sequential decision task that dissociated two computational-learning mechanisms, model-based and model-free. We then tested for habits by devaluing one of the rewards that had reinforced behavior. In each case, we found that individual differences in model-based learning predicted the participants' subsequent sensitivity to outcome devaluation, suggesting that an associative mechanism underlies a bias toward habit formation in healthy individuals.",
"title": ""
},
{
"docid": "cc5fae51afaac0119e3cac1cbdae722e",
"text": "The healthcare organization (hospitals, medical centers) should provide quality services at affordable costs. Quality of service implies diagnosing patients accurately and suggesting treatments that are effective. To achieve a correct and cost effective treatment, computer-based information and/or decision support Systems can be developed to full-fill the task. The generated information systems typically consist of large amount of data. Health care organizations must have ability to analyze these data. The Health care system includes data such as resource management, patient centric and transformed data. Data mining techniques are used to explore, analyze and extract these data using complex algorithms in order to discover unknown patterns. Many data mining techniques have been used in the diagnosis of heart disease with good accuracy. Neural Networks have shown great potential to be applied in the development of prediction system for various type of heart disease. This paper investigates the benefits and overhead of various neural network models for heart disease prediction.",
"title": ""
},
{
"docid": "a354f6c1d6411e4dec02031561c93ebd",
"text": "An operating system (OS) kernel is a critical software regarding to reliability and efficiency. Quality of modern OS kernels is already high enough. However, this is not the case for kernel modules, like, for example, device drivers that, due to various reasons, have a significantly lower level of quality. One of the most critical and widespread bugs in kernel modules are violations of rules for correct usage of a kernel API. One can find all such violations in modules or can prove their correctness using static verification tools that need contract specifications describing obligations of a kernel and modules relative to each other. This paper considers present methods and toolsets for static verification of kernel modules for different OSs. A new method for static verification of Linux kernel modules is proposed. This method allows one to configure the verification process at all its stages. It is shown how it can be adapted for checking kernel components of other OSs. An architecture of a configurable toolset for static verification of Linux kernel modules that implements the proposed method is described, and results of its practical application are presented. Directions for further development of the proposed method are discussed in conclusion.",
"title": ""
},
{
"docid": "8c29241ff4fd2f7c01043307a10c1726",
"text": "We are experiencing an abundance of Internet-of-Things (IoT) middleware solutions that provide connectivity for sensors and actuators to the Internet. To gain a widespread adoption, these middleware solutions, referred to as platforms, have to meet the expectations of different players in the IoT ecosystem, including device providers, application developers, and end-users, among others. In this article, we evaluate a representative sample of these platforms, both proprietary and open-source, on the basis of their ability to meet the expectations of different IoT users. The evaluation is thus more focused on how ready and usable these platforms are for IoT ecosystem players, rather than on the peculiarities of the underlying technological layers. The evaluation is carried out as a gap analysis of the current IoT landscape with respect to (i) the support for heterogeneous sensing and actuating technologies, (ii) the data ownership and its implications for security and privacy, (iii) data processing and data sharing capabilities, (iv) the support offered to application developers, (v) the completeness of an IoT ecosystem, and (vi) the availability of dedicated IoT marketplaces. The gap analysis aims to highlight the deficiencies of today’s solutions to improve their integration to tomorrow’s ecosystems. In order to strengthen the finding of our analysis, we conducted a survey among the partners of the Finnish IoT program, counting over 350 experts, to evaluate the most critical issues for the development of future IoT platforms. Based on the results of our analysis and our survey, we conclude this article with a list of recommendations for extending these IoT platforms in order to fill in the gaps.",
"title": ""
},
{
"docid": "9e4b7e87229dfb02c2600350899049be",
"text": "This paper presents an efficient and reliable swarm intelligence-based approach, namely elitist-mutated particle swarm optimization EMPSO technique, to derive reservoir operation policies for multipurpose reservoir systems. Particle swarm optimizers are inherently distributed algorithms, in which the solution for a problem emerges from the interactions between many simple individuals called particles. In this study the standard particle swarm optimization PSO algorithm is further improved by incorporating a new strategic mechanism called elitist-mutation to improve its performance. The proposed approach is first tested on a hypothetical multireservoir system, used by earlier researchers. EMPSO showed promising results, when compared with other techniques. To show practical utility, EMPSO is then applied to a realistic case study, the Bhadra reservoir system in India, which serves multiple purposes, namely irrigation and hydropower generation. To handle multiple objectives of the problem, a weighted approach is adopted. The results obtained demonstrate that EMPSO is consistently performing better than the standard PSO and genetic algorithm techniques. It is seen that EMPSO is yielding better quality solutions with less number of function evaluations. DOI: 10.1061/ ASCE 0733-9496 2007 133:3 192 CE Database subject headings: Reservoir operation; Optimization; Irrigation; Hydroelectric power generation.",
"title": ""
},
{
"docid": "11355807aa6b24f2eade366f391f0338",
"text": "Object detectors have hugely profited from moving towards an end-to-end learning paradigm: proposals, fea tures, and the classifier becoming one neural network improved results two-fold on general object detection. One indispensable component is non-maximum suppression (NMS), a post-processing algorithm responsible for merging all detections that belong to the same object. The de facto standard NMS algorithm is still fully hand-crafted, suspiciously simple, and — being based on greedy clustering with a fixed distance threshold — forces a trade-off between recall and precision. We propose a new network architecture designed to perform NMS, using only boxes and their score. We report experiments for person detection on PETS and for general object categories on the COCO dataset. Our approach shows promise providing improved localization and occlusion handling.",
"title": ""
},
{
"docid": "d8fc658756c4dd826b90a7e126e2e44d",
"text": "Knowledge graph embedding refers to projecting entities and relations in knowledge graph into continuous vector spaces. State-of-the-art methods, such as TransE, TransH, and TransR build embeddings by treating relation as translation from head entity to tail entity. However, previous models can not deal with reflexive/one-to-many/manyto-one/many-to-many relations properly, or lack of scalability and efficiency. Thus, we propose a novel method, flexible translation, named TransF, to address the above issues. TransF regards relation as translation between head entity vector and tail entity vector with flexible magnitude. To evaluate the proposed model, we conduct link prediction and triple classification on benchmark datasets. Experimental results show that our method remarkably improve the performance compared with several state-of-the-art baselines.",
"title": ""
}
] |
scidocsrr
|
c4fecb931da091a5614c02f88718a6a7
|
Major Traits / Qualities of Leadership
|
[
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "ecbdb56c52a59f26cf8e33fc533d608f",
"text": "The ethical nature of transformational leadership has been hotly debated. This debate is demonstrated in the range of descriptors that have been used to label transformational leaders including narcissistic, manipulative, and self-centred, but also ethical, just and effective. Therefore, the purpose of the present research was to address this issue directly by assessing the statistical relationship between perceived leader integrity and transformational leadership using the Perceived Leader Integrity Scale (PLIS) and the Multi-Factor Leadership Questionnaire (MLQ). In a national sample of 1354 managers a moderate to strong positive relationship was found between perceived integrity and the demonstration of transformational leadership behaviours. A similar relationship was found between perceived integrity and developmental exchange leadership. A systematic leniency bias was identified when respondents rated subordinates vis-à-vis peer ratings. In support of previous findings, perceived integrity was also found to correlate positively with leader and organisational effectiveness measures.",
"title": ""
}
] |
[
{
"docid": "5fe1fa98c953d778ee27a104802e5f2b",
"text": "We describe two general approaches to creating document-level maps of science. To create a local map one defines and directly maps a sample of data, such as all literature published in a set of information science journals. To create a global map of a research field one maps ‘all of science’ and then locates a literature sample within that full context. We provide a deductive argument that global mapping should create more accurate partitions of a research field than local mapping, followed by practical reasons why this may not be so. The field of information science is then mapped at the document level using both local and global methods to provide a case illustration of the differences between the methods. Textual coherence is used to assess the accuracies of both maps. We find that document clusters in the global map have significantly higher coherence than those in the local map, and that the global map provides unique insights into the field of information science that cannot be discerned from the local map. Specifically, we show that information science and computer science have a large interface and that computer science is the more progressive discipline at that interface. We also show that research communities in temporally linked threads have a much higher coherence than isolated communities, and that this feature can be used to predict which threads will persist into a subsequent year. Methods that could increase the accuracy of both local and global maps in the future are also discussed.",
"title": ""
},
{
"docid": "b252aea38a537a22ab34fdf44e9443d2",
"text": "The objective of this study is to describe the case of a patient presenting advanced epidermoid carcinoma of the penis associated to myiasis. A 41-year-old patient presenting with a necrotic lesion of the distal third of the penis infested with myiasis was attended in the emergency room of our hospital and was submitted to an urgent penectomy. This is the first case of penile cancer associated to myiasis described in the literature. This case reinforces the need for educative campaigns to reduce the incidence of this disease in developing countries.",
"title": ""
},
{
"docid": "e6db8cbbb3f7bac211f672ffdef44fb6",
"text": "This paper aims to develop a benchmarking framework that evaluates the cold chain performance of a company, reveals its strengths and weaknesses and finally identifies and prioritizes potential alternatives for continuous improvement. A Delphi-AHP-TOPSIS based methodology has divided the whole benchmarking into three stages. The first stage is Delphi method, where identification, synthesis and prioritization of key performance factors and sub-factors are done and a novel consistent measurement scale is developed. The second stage is Analytic Hierarchy Process (AHP) based cold chain performance evaluation of a selected company against its competitors, so as to observe cold chain performance of individual factors and sub-factors, as well as overall performance index. And, the third stage is Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) based assessment of possible alternatives for the continuous improvement of the company’s cold chain performance. Finally a demonstration of proposed methodology in a retail industry is presented for better understanding. The proposed framework can assist managers to comprehend the present strengths and weaknesses of their cold. They can identify good practices from the market leader and can benchmark them for improving weaknesses keeping in view the current operational conditions and strategies of the company. This framework also facilitates the decision makers to better understand the complex relationships of the relevant cold chain performance factors in decision-making. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "72420289372499b50e658ef0957a3ad9",
"text": "A ripple current cancellation technique injects AC current into the output voltage bus of a converter that is equal and opposite to the normal converter ripple current. The output current ripple is ideally zero, leading to ultra-low noise converter output voltages. The circuit requires few additional components, no active circuits are required. Only an additional filter inductor winding, an auxiliary inductor, and small capacitor are required. The circuit utilizes leakage inductance of the modified filter inductor as all or part of the required auxiliary inductance. Ripple cancellation is independent of switching frequency, duty cycle, and other converter parameters. The circuit eliminates ripple current in both continuous conduction mode and discontinuous conduction mode. Experimental results provide better than an 80/spl times/ ripple current reduction.",
"title": ""
},
{
"docid": "19f1a6c9c5faf73b8868164e8bb310c6",
"text": "Holoprosencephaly refers to a spectrum of craniofacial malformations including cyclopia, ethmocephaly, cebocephaly, and premaxillary agenesis. Etiologic heterogeneity is well documented. Chromosomal, genetic, and teratogenic factors have been implicated. Recognition of holoprosencephaly as a developmental field defect stresses the importance of close scrutiny of relatives for mild forms such as single median incisor, hypotelorism, bifid uvula, or pituitary deficiency.",
"title": ""
},
{
"docid": "c0b40058d003cdaa80d54aa190e48bc2",
"text": "Visual tracking plays an important role in many computer vision tasks. A common assumption in previous methods is that the video frames are blur free. In reality, motion blurs are pervasive in the real videos. In this paper we present a novel BLUr-driven Tracker (BLUT) framework for tracking motion-blurred targets. BLUT actively uses the information from blurs without performing debluring. Specifically, we integrate the tracking problem with the motion-from-blur problem under a unified sparse approximation framework. We further use the motion information inferred by blurs to guide the sampling process in the particle filter based tracking. To evaluate our method, we have collected a large number of video sequences with significatcant motion blurs and compared BLUT with state-of-the-art trackers. Experimental results show that, while many previous methods are sensitive to motion blurs, BLUT can robustly and reliably track severely blurred targets.",
"title": ""
},
{
"docid": "ea42c551841cc53c84c63f72ee9be0ae",
"text": "Phishing is a prevalent issue of today’s Internet. Previous approaches to counter phishing do not draw on a crucial factor to combat the threat the users themselves. We believe user education about the dangers of the Internet is a further key strategy to combat phishing. For this reason, we developed an Android app, a game called –NoPhish–, which educates the user in the detection of phishing URLs. It is crucial to evaluate NoPhish with respect to its effectiveness and the users’ knowledge retention. Therefore, we conducted a lab study as well as a retention study (five months later). The outcomes of the studies show that NoPhish helps users make better decisions with regard to the legitimacy of URLs immediately after playing NoPhish as well as after some time has passed. The focus of this paper is on the description and the evaluation of both studies. This includes findings regarding those types of URLs that are most difficult to decide on as well as ideas to further improve NoPhish.",
"title": ""
},
{
"docid": "b468726c2901146f1ca02df13936e968",
"text": "Chinchillas have been successfully maintained in captivity for almost a century. They have only recently been recognized as excellent, long-lived, and robust pets. Most of the literature on diseases of chinchillas comes from farmed chinchillas, whereas reports of pet chinchilla diseases continue to be sparse. This review aims to provide information on current, poorly reported disorders of pet chinchillas, such as penile problems, urolithiasis, periodontal disease, otitis media, cardiac disease, pseudomonadal infections, and giardiasis. This review is intended to serve as a complement to current veterinary literature while providing valuable and clinically relevant information for veterinarians treating chinchillas.",
"title": ""
},
{
"docid": "872370f375d779435eb098571f3ab763",
"text": "The aim of this study was to explore the potential of fused-deposition 3-dimensional printing (FDM 3DP) to produce modified-release drug loaded tablets. Two aminosalicylate isomers used in the treatment of inflammatory bowel disease (IBD), 5-aminosalicylic acid (5-ASA, mesalazine) and 4-aminosalicylic acid (4-ASA), were selected as model drugs. Commercially produced polyvinyl alcohol (PVA) filaments were loaded with the drugs in an ethanolic drug solution. A final drug-loading of 0.06% w/w and 0.25% w/w was achieved for the 5-ASA and 4-ASA strands, respectively. 10.5mm diameter tablets of both PVA/4-ASA and PVA/5-ASA were subsequently printed using an FDM 3D printer, and varying the weight and densities of the printed tablets was achieved by selecting the infill percentage in the printer software. The tablets were mechanically strong, and the FDM 3D printing was shown to be an effective process for the manufacture of the drug, 5-ASA. Significant thermal degradation of the active 4-ASA (50%) occurred during printing, however, indicating that the method may not be appropriate for drugs when printing at high temperatures exceeding those of the degradation point. Differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA) of the formulated blends confirmed these findings while highlighting the potential of thermal analytical techniques to anticipate drug degradation issues in the 3D printing process. The results of the dissolution tests conducted in modified Hank's bicarbonate buffer showed that release profiles for both drugs were dependent on both the drug itself and on the infill percentage of the tablet. Our work here demonstrates the potential role of FDM 3DP as an efficient and low-cost alternative method of manufacturing individually tailored oral drug dosage, and also for production of modified-release formulations.",
"title": ""
},
{
"docid": "1b30c14536db1161b77258b1ce213fbb",
"text": "Click-through rate (CTR) prediction and relevance ranking are two fundamental problems in web advertising. In this study, we address the problem of modeling the relationship between CTR and relevance for sponsored search. We used normalized relevance scores comparable across all queries to represent relevance when modeling with CTR, instead of directly using human judgment labels or relevance scores valid only within same query. We classified clicks by identifying their relevance quality using dwell time and session information, and compared all clicks versus selective clicks effects when modeling relevance.\n Our results showed that the cleaned click signal outperforms raw click signal and others we explored, in terms of relevance score fitting. The cleaned clicks include clicks with dwell time greater than 5 seconds and last clicks in session. Besides traditional thoughts that there is no linear relation between click and relevance, we showed that the cleaned click based CTR can be fitted well with the normalized relevance scores using a quadratic regression model. This relevance-click model could help to train ranking models using processed click feedback to complement expensive human editorial relevance labels, or better leverage relevance signals in CTR prediction.",
"title": ""
},
{
"docid": "ae800ced5663d320fcaca2df6f6bf793",
"text": "Stowage planning for container vessels concerns the core competence of the shipping lines. As such, automated stowage planning has attracted much research in the past two decades, but with few documented successes. In an ongoing project, we are developing a prototype stowage planning system aiming for large containerships. The system consists of three modules: the stowage plan generator, the stability adjustment module, and the optimization engine. This paper mainly focuses on the stability adjustment module. The objective of the stability adjustment module is to check the global ship stability of the stowage plan produced by the stowage plan generator and resolve the stability issues by applying a heuristic algorithm to search for alternative feasible locations for containers that violate some of the stability criteria. We demonstrate that the procedure proposed is capable of solving the stability problems for a large containership with more than 5000 TEUs. Keywords— Automation, Stowage Planning, Local Search, Heuristic algorithm, Stability Optimization",
"title": ""
},
{
"docid": "f289b58d16bf0b3a017a9b1c173cbeb6",
"text": "All hospitalisations for pulmonary arterial hypertension (PAH) in the Scottish population were examined to determine the epidemiological features of PAH. These data were compared with expert data from the Scottish Pulmonary Vascular Unit (SPVU). Using the linked Scottish Morbidity Record scheme, data from all adults aged 16-65 yrs admitted with PAH (idiopathic PAH, pulmonary hypertension associated with congenital heart abnormalities and pulmonary hypertension associated with connective tissue disorders) during the period 1986-2001 were identified. These data were compared with the most recent data in the SPVU database (2005). Overall, 374 Scottish males and females aged 16-65 yrs were hospitalised with incident PAH during 1986-2001. The annual incidence of PAH was 7.1 cases per million population. On December 31, 2002, there were 165 surviving cases, giving a prevalence of PAH of 52 cases per million population. Data from the SPVU were available for 1997-2006. In 2005, the last year with a complete data set, the incidence of PAH was 7.6 cases per million population and the corresponding prevalence was 26 cases per million population. Hospitalisation data from the Scottish Morbidity Record scheme gave higher prevalences of pulmonary arterial hypertension than data from the expert centres (Scotland and France). The hospitalisation data may overestimate the true frequency of pulmonary arterial hypertension in the population, but it is also possible that the expert centres underestimate the true frequency.",
"title": ""
},
{
"docid": "99dcde334931eeb8e20ce7aa3c7982d5",
"text": "We describe a framework for multiscale image analysis in which line segments play a role analogous to the role played by points in wavelet analysis. The framework has five key components. The beamlet dictionary is a dyadicallyorganized collection of line segments, occupying a range of dyadic locations and scales, and occurring at a range of orientations. The beamlet transform of an image f(x, y) is the collection of integrals of f over each segment in the beamlet dictionary; the resulting information is stored in a beamlet pyramid. The beamlet graph is the graph structure with pixel corners as vertices and beamlets as edges; a path through this graph corresponds to a polygon in the original image. By exploiting the first four components of the beamlet framework, we can formulate beamlet-based algorithms which are able to identify and extract beamlets and chains of beamlets with special properties. In this paper we describe a four-level hierarchy of beamlet algorithms. The first level consists of simple procedures which ignore the structure of the beamlet pyramid and beamlet graph; the second level exploits only the parent-child dependence between scales; the third level incorporates collinearity and co-curvity relationships; and the fourth level allows global optimization over the full space of polygons in an image. These algorithms can be shown in practice to have suprisingly powerful and apparently unprecedented capabilities, for example in detection of very faint curves in very noisy data. We compare this framework with important antecedents in image processing (Brandt and Dym; Horn and collaborators; Götze and Druckenmiller) and in geometric measure theory (Jones; David and Semmes; and Lerman).",
"title": ""
},
{
"docid": "faa1a49f949d5ba997f4285ef2e708b2",
"text": "Appendiceal mucinous neoplasms sometimes present with peritoneal dissemination, which was previously a lethal condition with a median survival of about 3 years. Traditionally, surgical treatment consisted of debulking that was repeated until no further benefit could be achieved; systemic chemotherapy was sometimes used as a palliative option. Now, visible disease tends to be removed through visceral resections and peritonectomy. To avoid entrapment of tumour cells at operative sites and to destroy small residual mucinous tumour nodules, cytoreductive surgery is combined with intraperitoneal chemotherapy with mitomycin at 42 degrees C. Fluorouracil is then given postoperatively for 5 days. If the mucinous neoplasm is minimally invasive and cytoreduction complete, these treatments result in a 20-year survival of 70%. In the absence of a phase III study, this new combined treatment should be regarded as the standard of care for epithelial appendiceal neoplasms and pseudomyxoma peritonei syndrome.",
"title": ""
},
{
"docid": "981e88bd1f4187972f8a3d04960dd2dd",
"text": "The purpose of this study is to examine the appropriateness and effectiveness of the assistive use of robot projector based augmented reality (AR) to children’s dramatic activity. A system that employ a mobile robot mounted with a projector-camera is used to help manage children’s dramatic activity by projecting backdrops and creating a synthetic video imagery, where e.g. children’s faces is replaced with graphic characters. In this Delphi based study, a panel consist of 33 professionals include 11children education experts (college professors majoring in early childhood education), children field educators (kindergarten teachers and principals), and 11 AR and robot technology experts. The experts view the excerpts from the video taken from the actual usage situation. In the first stage of survey, we collect the panel's perspectives on applying the latest new technologies for instructing dramatic activity to children using an open ended questionnaire. Based on the results of the preliminary survey, the subsequent questionnaires (with 5 point Likert scales) are developed for the second and third in-depth surveys. In the second survey, 36 questions is categorized into 5 areas: (1) developmental and educational values, (2) impact on the teacher's role, (3) applicability and special considerations in the kindergarten, (4) external environment and required support, and (5) criteria for the selection of the story in the drama activity. The third survey mainly investigate how AR or robots can be of use in children’s dramatic activity in other ways (than as originally given) and to other educational domains. The surveys show that experts most appreciated the use of AR and robot for positive educational and developmental effects due to the children’s keen interests and in turn enhanced immersion into the dramatic activity. Consequently, the experts recommended that proper stories, scenes and technological realizations need to be selected carefully, in the light of children’s development, while lever aging on strengths of the technologies used.",
"title": ""
},
{
"docid": "26dc59c30371f1d0b2ff2e62a96f9b0f",
"text": "Hindi is very complex language with large number of phonemes and being used with various ascents in different regions in India. In this manuscript, speaker dependent and independent isolated Hindi word recognizers using the Hidden Markov Model (HMM) is implemented, under noisy environment. For this study, a set of 10 Hindi names has been chosen as a test set for which the training and testing is performed. The scheme instigated here implements the Mel Frequency Cepstral Coefficients (MFCC) in order to compute the acoustic features of the speech signal. Then, K-means algorithm is used for the codebook generation by performing clustering over the obtained feature space. Baum Welch algorithm is used for re-estimating the parameters, and finally for deciding the recognized Hindi word whose model likelihood is highest, Viterbi algorithm has been implemented; for the given HMM. This work resulted in successful recognition with 98. 6% recognition rate for speaker dependent recognition, for total of 10 speakers (6 male, 4 female) and 97. 5% for speaker independent isolated word recognizer for 10 speakers (male).",
"title": ""
},
{
"docid": "58702f835df43337692f855f35a9f903",
"text": "A dual-mode wide-band transformer based VCO is proposed. The two port impedance of the transformer based resonator is analyzed to derive the optimum primary to secondary capacitor load ratio, for robust mode selectivity and minimum power consumption. Fabricated in a 16nm FinFET technology, the design achieves 2.6× continuous tuning range spanning 7-to-18.3 GHz using a coil area of 120×150 μm2. The absence of lossy switches helps in maintaining phase noise of -112 to -100 dBc/Hz at 1 MHz offset, across the entire tuning range. The VCO consumes 3-4.4 mW and realizes power frequency tuning normalized figure of merit of 12.8 and 2.4 dB at 7 and 18.3 GHz respectively.",
"title": ""
},
{
"docid": "4d8c869c9d6e1d7ba38f56a124b84412",
"text": "We propose a novel reversible jump Markov chain Monte Carlo (MCMC) simulated an nealing algorithm to optimize radial basis function (RBF) networks. This algorithm enables us to maximize the joint posterior distribution of the network parameters and the number of basis functions. It performs a global search in the joint space of the pa rameters and number of parameters, thereby surmounting the problem of local minima. We also show that by calibrating a Bayesian model, we can obtain the classical AIC, BIC and MDL model selection criteria within a penalized likelihood framework. Finally, we show theoretically and empirically that the algorithm converges to the modes of the full posterior distribution in an efficient way.",
"title": ""
},
{
"docid": "ceb59133deb7828edaf602308cb3450a",
"text": "Abstract While there has been a great deal of interest in the modelling of non-linearities and regime shifts in economic time series, there is no clear consensus regarding the forecasting abilities of these models. In this paper we develop a general approach to predict multiple time series subject to Markovian shifts in the regime. The feasibility of the proposed forecasting techniques in empirical research is demonstrated and their forecast accuracy is evaluated.",
"title": ""
},
{
"docid": "55ffe87f74194ab3de60fea9d888d9ad",
"text": "A new priority queue implementation for the future event set problem is described in this article. The new implementation is shown experimentally to be O(1) in queue size for the priority increment distributions recently considered by Jones in his review article. It displays hold times three times shorter than splay trees for a queue size of 10,000 events. The new implementation, called a calendar queue, is a very simple structure of the multiple list variety using a novel solution to the overflow problem.",
"title": ""
}
] |
scidocsrr
|
71db204eb214c9b2070918b5ebcbec69
|
Illustrative Language Understanding: Large-Scale Visual Grounding with Image Search
|
[
{
"docid": "3355c37593ee9ef1b2ab29823ca8c1d4",
"text": "The paper overviews the 11th evaluation campaign organized by the IWSLT workshop. The 2014 evaluation offered multiple tracks on lecture transcription and translation based on the TED Talks corpus. In particular, this year IWSLT included three automatic speech recognition tracks, on English, German and Italian, five speech translation tracks, from English to French, English to German, German to English, English to Italian, and Italian to English, and five text translation track, also from English to French, English to German, German to English, English to Italian, and Italian to English. In addition to the official tracks, speech and text translation optional tracks were offered, globally involving 12 other languages: Arabic, Spanish, Portuguese (B), Hebrew, Chinese, Polish, Persian, Slovenian, Turkish, Dutch, Romanian, Russian. Overall, 21 teams participated in the evaluation, for a total of 76 primary runs submitted. Participants were also asked to submit runs on the 2013 test set (progress test set), in order to measure the progress of systems with respect to the previous year. All runs were evaluated with objective metrics, and submissions for two of the official text translation tracks were also evaluated with human post-editing.",
"title": ""
}
] |
[
{
"docid": "57104614eb2ff83893f05fbb2ff65a7d",
"text": "We have developed a novel assembly task partner robot to support workers in their task. This system, PaDY (in-time Parts/tools Delivery to You robot), delivers parts and tools to a worker by recognizing the worker's behavior in the car production line; thus, improving the efficiency of the work by reducing the worker's physical workload for picking parts and tools. For this purpose, it is necessary to plan the trajectory of the robot before the worker moves to the next location for another assembling task. First a prediction method for the worker's trajectory using a Markov model for a discretized work space into cells is proposed, then motion planning method is proposed using the predicted worker's trajectory and a mixture Gaussian distribution for each area corresponding to each procedure of the work process in the automobile coordinate system. Experimental results illustrate the validity of the proposed motion planning method.",
"title": ""
},
{
"docid": "24a1d68957279ec9120b4f2a24e9d887",
"text": "The idea of applying machine learning(ML) to solve problems in security domains is almost 3 decades old. As information and communications grow more ubiquitous and more data become available, many security risks arise as well as appetite to manage and mitigate such risks. Consequently, research on applying and designing ML algorithms and systems for security has grown fast, ranging from intrusion detection systems(IDS) and malware classification to security policy management(SPM) and information leak checking. In this paper, we systematically study the methods, algorithms, and system designs in academic publications from 2008-2015 that applied ML in security domains. 98% of the surveyed papers appeared in the 6 highest-ranked academic security conferences and 1 conference known for pioneering ML applications in security. We examine the generalized system designs, underlying assumptions, measurements, and use cases in active research. Our examinations lead to 1) a taxonomy on ML paradigms and security domains for future exploration and exploitation, and 2) an agenda detailing open and upcoming challenges. Based on our survey, we also suggest a point of view that treats security as a game theory problem instead of a batch-trained ML problem.",
"title": ""
},
{
"docid": "068e4db45998b1b99c36ffb8684c66e8",
"text": "f ( HE NUMBER OF people worldwide with heart failure (HF) is increasing at an alarming pace. In the United States lone, there are approximately 5.3 million people who have HF, ith a prevalence estimated at 10 per 1,000 in people over the age f 65.1 It is now estimated that there are 660,000 new cases of HF iagnosed every year for people over 45 years of age. In 2008, here were more than 1 million hospital admissions for HF at a ost of $34.8 billion. Currently, preventative measures, optimal edical therapy, and heart transplantation are not effectively reucing the overall morbidity and mortality of this syndrome. The American College of Cardiology/American Heart Assocition (ACC/AHA) have classified HF in 4 stages based on the rogression of the disease (Table 1).2,3 Early in the course of the isease (stages A and B), symptoms are absent or mild, but the atients are at high risk of developing symptomatic or refractory isease. As the disease progresses through stage C, ventricular unction is maintained by adrenergic stimulation, activation of enin-angiotensin-aldosterone, and other neurohumoral and cytoine systems.4,5 These compensatory mechanisms become less ffective over time, and cardiac function deteriorates to the point here patients have marked symptoms at rest (stage D). The CC/AHA-recommended therapeutic options for patients with tage D symptoms are continuous inotropic support, heart translantation, mechanical circulatory support, or hospice care. Standard HF medical therapies such as angiotensin-convertng enzyme inhibitors, -blockers, diuretics, inotropic agents, nd antiarrhythmics may relieve symptoms, but the mortality ate remains unaffected. Optimal medical therapy does not halt he progression toward stage D HF symptoms, and when this ccurs, there is a greater than 75% 2-year mortality risk, with urgical intervention being the only effective treatment. Cariac transplantation is an effective therapy for terminal HF and s associated with excellent 1-year survival (93%), 5-year surival (88%), and functional capacity.6 However, there are aproximately 2,200 donors available for as many as 100,000 atients with advanced-stage HF.7 Moreover, donor hearts are",
"title": ""
},
{
"docid": "578e8c5d2ed1fd41bd2c869eb842f305",
"text": "We are investigating the magnetic resonance imaging characteristics of magnetic nanoparticles (MNPs) that consist of an iron-oxide magnetic core coated with oleic acid (OA), then stabilized with a pluronic or tetronic block copolymer. Since pluronics and tetronics vary structurally, and also in the ratio of hydrophobic (poly[propylene oxide]) and hydrophilic (poly[ethylene oxide]) segments in the polymer chain and in molecular weight, it was hypothesized that their anchoring to the OA coating around the magnetic core could significantly influence the physical properties of MNPs, their interactions with biological environment following intravenous administration, and ability to localize to tumors. The amount of block copolymer associated with MNPs was seen to depend upon their molecular structures and influence the characteristics of MNPs. Pluronic F127-modified MNPs demonstrated sustained and enhanced contrast in the whole tumor, whereas that of Feridex IV was transient and confined to the tumor periphery. In conclusion, our pluronic F127-coated MNPs, which can also be loaded with anticancer agents for drug delivery, can be developed as an effective cancer theranostic agent, i.e. an agent with combined drug delivery and imaging properties.",
"title": ""
},
{
"docid": "cb59c880b3848b7518264f305cfea32a",
"text": "Leakage current reduction is crucial for the transformerless photovoltaic inverters. The conventional three-phase current source H6 inverter suffers from the large leakage current, which restricts its application to transformerless PV systems. In order to overcome the limitations, a new three-phase current source H7 (CH7) inverter is proposed in this paper. Only one additional Insulated Gate Bipolar Transistor is needed, but the leakage current can be effectively suppressed with a new space vector modulation (SVM). Finally, the experimental tests are carried out on the proposed CH7 inverter, and the experimental results verify the effectiveness of the proposed topology and SVM method.",
"title": ""
},
{
"docid": "69f2773d7901ac9d477604a85fb6a591",
"text": "We propose an expert-augmented actor-critic algorithm, which we evaluate on two environments with sparse rewards: Montezuma’s Revenge and a demanding maze from the ViZDoom suite. In the case of Montezuma’s Revenge, an agent trained with our method achieves very good results consistently scoring above 27,000 points (in many experiments beating the first world). With an appropriate choice of hyperparameters, our algorithm surpasses the performance of the expert data. In a number of experiments, we have observed an unreported bug in Montezuma’s Revenge which allowed the agent to score more than 800, 000 points.",
"title": ""
},
{
"docid": "92d3bb6142eafc9dc9f82ce6a766941a",
"text": "The classical Rough Set Theory (RST) always generates too many rules, making it difficult for decision makers to choose a suitable rule. In this study, we use two processes (pre process and post process) to select suitable rules and to explore the relationship among attributes. In pre process, we propose a pruning process to select suitable rules by setting up a threshold on the support object of decision rules, to thereby solve the problem of too many rules. The post process used the formal concept analysis from these suitable rules to explore the attribute relationship and the most important factors affecting decision making for choosing behaviours of personal investment portfolios. In this study, we explored the main concepts (characteristics) for the conservative portfolio: the stable job, less than 4 working years, and the gender is male; the moderate portfolio: high school education, the monthly salary between NT$30,001 (US$1000) and NT$80,000 (US$2667), the gender is male; and the aggressive portfolio: the monthly salary between NT$30,001 (US$1000) and NT$80,000 (US$2667), less than 4 working years, and a stable job. The study result successfully explored the most important factors affecting the personal investment portfolios and the suitable rules that can help decision makers. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "0bdab8a45e8e2cf3c0d47cd94a0cc52c",
"text": "In this paper we study the suitability and performance of conventional time delay of arrival estimation for humanoid robots. Moving away from simulated environments, we look at the influence of real-world robot's shape on the sound source localization. We present a TDOA/GCC based sound source localization method that successfully addresses this influence by utilizing a pre-measured set of TDOAs. The measuring methodology and important aspects of the implementation are thoroughly presented. Finally, an evaluation is made with the humanoid robot Nao. The experimental results are presented and discussed. Key-Words: Microphone arrays, time delay of arrival, sound source localization, generalized cross correlation.",
"title": ""
},
{
"docid": "899422014472e5b31f3935bd3d5452fd",
"text": "The subject-oriented modelling approach [5] significally differs from the classic Petri net based approach of many business process modeling languages like EPC [9], Business Process Model and Notation (BPMN) [11], and also Yet Another Workflow Language (YAWL) [10]. In this work, we compare the two approaches by modeling a case study called \"Procure to Pay\"[3], a typical business process where some equipment for a construction site is rented and finally paid. The case study is not only modelled but also automated using the Metasonic Suite for the subject-oriented and YAWL for the Petri net based approach.",
"title": ""
},
{
"docid": "1404323d435b1b7999feda249f817f36",
"text": "The Process of Encryption and Decryption is performed by using Symmetric key cryptography and public key cryptography for Secure Communication. In this paper, we studied that how the process of Encryption and Decryption is perform in case of Symmetric key and public key cryptography using AES and DES algorithms and modified RSA algorithm.",
"title": ""
},
{
"docid": "9743b6452df2f5d5e2834c397076f7b7",
"text": "This paper deals with the application of a well-known neural network technique, multi-layer back-propagation (BP) neural network, in financial data mining. A modified neural network forecasting model is presented, and an intelligent mining system is developed. The system can forecast the buying and selling signs according to the prediction of future trends to stock market, and provide decision-making for stock investors. The simulation result of seven years to Shanghai Composite Index shows that the return achieved by this mining sys-tem is about three times as large as that achieved by the buy and hold strategy, so it is advantageous to apply neural networks to forecast financial time series, the different investors could benefit from it. Keywords—data mining, neural network, stock forecasting.",
"title": ""
},
{
"docid": "cf68e7b27b45c3e0f779471880d07846",
"text": "This paper presents a new switching strategy for pulse width modulation (PWM) power converters. Since the proposed strategy uses independent on/off switching action of the upper or lower arm according to the polarity of the current, the dead time is not needed except instant of current polarity change. Therefore, it is not necessary to compensate the dead time effect and the possibility of arm short is strongly eliminated. The current control of PWM power converters can easily adopt the proposed switching strategy by using the polarity information of the reference current instead of the real current, thus eliminating the problems that commonly arise from real current detection. In order to confirm the usefulness of the proposed switching strategy, experimental tests were done using a single-phase inverter with passive loads, a three-phase inverter for induction motor drives, a three-phase ac/dc PWM converter, a three-phase active power filter, and a class-D amplifier, the results of which are presented in this paper",
"title": ""
},
{
"docid": "d82c85205acaabab61ff720675418a20",
"text": "We introduce a new system for automatic image content removal and inpainting. Unlike traditional inpainting algorithms, which require advance knowledge of the region to be filled in, our system automatically detects the area to be removed and infilled. Region segmentation and inpainting are performed jointly in a single pass. In this way, potential segmentation errors are more naturally alleviated by the inpainting module. The system is implemented as an encoder-decoder architecture, with two decoder branches, one tasked with segmentation of the foreground region, the other with inpainting. The encoder and the two decoder branches are linked via neglect nodes, which guide the inpainting process in selecting which areas need reconstruction. The whole model is trained using a conditional GAN strategy. Comparative experiments show that our algorithm outperforms state-of-the-art inpainting techniques (which, unlike our system, do not segment the input image and thus must be aided by an external segmentation module.)",
"title": ""
},
{
"docid": "740b783d840a706992dc6977a918f1f1",
"text": "Inadequate curriculum for software engineering is considered to be one of the most common software risks. A number of solutions, on improving Software Engineering Education (SEE) have been reported in literature but there is a need to collectively present these solutions at one place. We have performed a mapping study to present a broad view of literature; published on improving the current state of SEE. Our aim is to give academicians, practitioners and researchers an international view of the current state of SEE. Our study has identified 70 primary studies that met our selection criteria, which we further classified and categorized in a well-defined Software Engineering educational framework. We found that the most researched category within the SE educational framework is Innovative Teaching Methods whereas the least amount of research was found in Student Learning and Assessment category. Our future work is to conduct a Systematic Literature Review on SEE. Keywords—Mapping Study, Software Engineering, Software Engineering Education, Literature Survey.",
"title": ""
},
{
"docid": "19f1f1156ca9464759169dd2d4005bf6",
"text": "We first consider the problem of partitioning the edges of a graph ~ into bipartite cliques such that the total order of the cliques is minimized, where the order of a clique is the number of vertices in it. It is shown that the problem is NP-complete. We then prove the existence of a partition of small total order in a sufficiently dense graph and devise an efilcient algorithm to compute such a partition. It turns out that our algorithm exhibits a trade-off between the total order of the partition and the running time. Next, we define the notion of a compression of a graph ~ and use the result on graph partitioning to efficiently compute an optimal compression for graphs of a given size. An interesting application of the graph compression result arises from the fact that several graph algorithms can be adapted to work with the compressed rep~esentation of the input graph, thereby improving the bound on their running times particularly on dense graphs. This makes use of the trade-off result we obtain from our partitioning algorithm. The algorithms analyzed include those for matchings, vertex connectivity, edge connectivity and shortest paths. In each case, we improve upon the running times of the best-known algorithms for these problems.",
"title": ""
},
{
"docid": "790895861cb5bba78513d26c1eb30e4c",
"text": "This paper develops an integrated approach, combining quality function deployment (QFD), fuzzy set theory, and analytic hierarchy process (AHP) approach, to evaluate and select the optimal third-party logistics service providers (3PLs). In the approach, multiple evaluating criteria are derived from the requirements of company stakeholders using a series of house of quality (HOQ). The importance of evaluating criteria is prioritized with respect to the degree of achieving the stakeholder requirements using fuzzy AHP. Based on the ranked criteria, alternative 3PLs are evaluated and compared with each other using fuzzy AHP again to make an optimal selection. The effectiveness of proposed approach is demonstrated by applying it to a Hong Kong based enterprise that supplies hard disk components. The proposed integrated approach outperforms the existing approaches because the outsourcing strategy and 3PLs selection are derived from the corporate/business strategy.",
"title": ""
},
{
"docid": "b24fe8a5357af646dd2706c62a46eb25",
"text": "This paper presents an intelligent adaptive system for the integration of haptic output in graphical user interfaces. The system observes the user’s actions, extracts meaningful features, and generates a user and application specific model. When the model is sufficiently detailled, it is used to predict the widget which is most likely to be used next by the user. Upon entering this widget, two magnets in a specialized mouse are activated to stop the movement, so target acquisition becomes easier and more comfortable. Besides the intelligent control system, we will present several methods to generate haptic cues which might be integrated in multimodal user interfaces in the future.",
"title": ""
},
{
"docid": "462a0746875e35116f669b16d851f360",
"text": "We previously have applied deep autoencoder (DAE) for noise reduction and speech enhancement. However, the DAE was trained using only clean speech. In this study, by using noisyclean training pairs, we further introduce a denoising process in learning the DAE. In training the DAE, we still adopt greedy layer-wised pretraining plus fine tuning strategy. In pretraining, each layer is trained as a one-hidden-layer neural autoencoder (AE) using noisy-clean speech pairs as input and output (or transformed noisy-clean speech pairs by preceding AEs). Fine tuning was done by stacking all AEs with pretrained parameters for initialization. The trained DAE is used as a filter for speech estimation when noisy speech is given. Speech enhancement experiments were done to examine the performance of the trained denoising DAE. Noise reduction, speech distortion, and perceptual evaluation of speech quality (PESQ) criteria are used in the performance evaluations. Experimental results show that adding depth of the DAE consistently increase the performance when a large training data set is given. In addition, compared with a minimum mean square error based speech enhancement algorithm, our proposed denoising DAE provided superior performance on the three objective evaluations.",
"title": ""
},
{
"docid": "1d3b2a5906d7db650db042db9ececed1",
"text": "Music consists of precisely patterned sequences of both movement and sound that engage the mind in a multitude of experiences. We move in response to music and we move in order to make music. Because of the intimate coupling between perception and action, music provides a panoramic window through which we can examine the neural organization of complex behaviors that are at the core of human nature. Although the cognitive neuroscience of music is still in its infancy, a considerable behavioral and neuroimaging literature has amassed that pertains to neural mechanisms that underlie musical experience. Here we review neuroimaging studies of explicit sequence learning and temporal production—findings that ultimately lay the groundwork for understanding how more complex musical sequences are represented and produced by the brain. These studies are also brought into an existing framework concerning the interaction of attention and time-keeping mechanisms in perceiving complex patterns of information that are distributed in time, such as those that occur in music.",
"title": ""
}
] |
scidocsrr
|
214d3555055146bd6209a393b734d2d6
|
Stress and multitasking in everyday college life: an empirical study of online activity
|
[
{
"docid": "ed34383cada585951e1dcc62445d08c2",
"text": "The increasing volume of e-mail and other technologically enabled communications are widely regarded as a growing source of stress in people’s lives. Yet research also suggests that new media afford people additional flexibility and control by enabling them to communicate from anywhere at any time. Using a combination of quantitative and qualitative data, this paper builds theory that unravels this apparent contradiction. As the literature would predict, we found that the more time people spent handling e-mail, the greater was their sense of being overloaded, and the more e-mail they processed, the greater their perceived ability to cope. Contrary to assumptions of prior studies, we found no evidence that time spent working mediates e-mail-related overload. Instead, e-mail’s material properties entwined with social norms and interpretations in a way that led informants to single out e-mail as a cultural symbol of the overload they experience in their lives. Moreover, by serving as a symbol, e-mail distracted people from recognizing other sources of overload in their work lives. Our study deepens our understanding of the impact of communication technologies on people’s lives and helps untangle those technologies’ seemingly contradictory influences.",
"title": ""
}
] |
[
{
"docid": "fe0587c51c4992aa03f28b18f610232f",
"text": "We show how to find sufficiently small integer solutions to a polynomial in a single variable modulo N, and to a polynomial in two variables over the integers. The methods sometimes extend to more variables. As applications: RSA encryption with exponent 3 is vulnerable if the opponent knows two-thirds of the message, or if two messages agree over eight-ninths of their length; and we can find the factors of N=PQ if we are given the high order $\\frac{1}{4} \\log_2 N$ bits of P.",
"title": ""
},
{
"docid": "124fa48e1e842f2068a8fb55a2b8bb8e",
"text": "We present an augmented reality application for mechanics education. It utilizes a recent physics engine developed for the PC gaming market to simulate physical experiments in the domain of mechanics in real time. Students are enabled to actively build own experiments and study them in a three-dimensional virtual world. A variety of tools are provided to analyze forces, mass, paths and other properties of objects before, during and after experiments. Innovative teaching content is presented that exploits the strengths of our immersive virtual environment. PhysicsPlayground serves as an example of how current technologies can be combined to deliver a new quality in physics education.",
"title": ""
},
{
"docid": "5339554b6f753b69b5ace705af0263cd",
"text": "We explore several oversampling techniques for an imbalanced multi-label classification problem, a setting often encountered when developing models for Computer-Aided Diagnosis (CADx) systems. While most CADx systems aim to optimize classifiers for overall accuracy without considering the relative distribution of each class, we look into using synthetic sampling to increase perclass performance when predicting the degree of malignancy. Using low-level image features and a random forest classifier, we show that using synthetic oversampling techniques increases the sensitivity of the minority classes by an average of 7.22% points, with as much as a 19.88% point increase in sensitivity for a particular minority class. Furthermore, the analysis of low-level image feature distributions for the synthetic nodules reveals that these nodules can provide insights on how to preprocess image data for better classification performance or how to supplement the original datasets when more data acquisition is feasible.",
"title": ""
},
{
"docid": "8183fe0c103e2ddcab5b35549ed8629f",
"text": "The performance of Douglas-Rachford splitting and the alternating direction method of multipliers (ADMM) (i.e. Douglas-Rachford splitting on the dual problem) are sensitive to conditioning of the problem data. For a restricted class of problems that enjoy a linear rate of convergence, we show in this paper how to precondition the optimization data to optimize a bound on that rate. We also generalize the preconditioning methods to problems that do not satisfy all assumptions needed to guarantee a linear convergence. The efficiency of the proposed preconditioning is confirmed in a numerical example, where improvements of more than one order of magnitude are observed compared to when no preconditioning is used.",
"title": ""
},
{
"docid": "25a7f23c146add12bfab3f1fc497a065",
"text": "One of the greatest puzzles of human evolutionary history concerns the how and why of the transition from small-scale, ‘simple’ societies to large-scale, hierarchically complex ones. This paper reviews theoretical approaches to resolving this puzzle. Our discussion integrates ideas and concepts from evolutionary biology, anthropology, and political science. The evolutionary framework of multilevel selection suggests that complex hierarchies can arise in response to selection imposed by intergroup conflict (warfare). The logical coherency of this theory has been investigated with mathematical models, and its predictions were tested empirically by constructing a database of the largest territorial states in the world (with the focus on the preindustrial era).",
"title": ""
},
{
"docid": "f9580093dcf61a9d6905265cfb3a0d32",
"text": "The rapid adoption of electronic health records (EHR) provides a comprehensive source for exploratory and predictive analytic to support clinical decision-making. In this paper, we investigate how to utilize EHR to tailor treatments to individual patients based on their likelihood to respond to a therapy. We construct a heterogeneous graph which includes two domains (patients and drugs) and encodes three relationships (patient similarity, drug similarity, and patient-drug prior associations). We describe a novel approach for performing a label propagation procedure to spread the label information representing the effectiveness of different drugs for different patients over this heterogeneous graph. The proposed method has been applied on a real-world EHR dataset to help identify personalized treatments for hypercholesterolemia. The experimental results demonstrate the effectiveness of the approach and suggest that the combination of appropriate patient similarity and drug similarity analytics could lead to actionable insights for personalized medicine. Particularly, by leveraging drug similarity in combination with patient similarity, our method could perform well even on new or rarely used drugs for which there are few records of known past performance.",
"title": ""
},
{
"docid": "733f5029329072adf5635f0b4d0ad1cb",
"text": "We present a new approach to scalable training of deep learning machines by incremental block training with intra-block parallel optimization to leverage data parallelism and blockwise model-update filtering to stabilize learning process. By using an implementation on a distributed GPU cluster with an MPI-based HPC machine learning framework to coordinate parallel job scheduling and collective communication, we have trained successfully deep bidirectional long short-term memory (LSTM) recurrent neural networks (RNNs) and fully-connected feed-forward deep neural networks (DNNs) for large vocabulary continuous speech recognition on two benchmark tasks, namely 309-hour Switchboard-I task and 1,860-hour \"Switch-board+Fisher\" task. We achieve almost linear speedup up to 16 GPU cards on LSTM task and 64 GPU cards on DNN task, with either no degradation or improved recognition accuracy in comparison with that of running a traditional mini-batch based stochastic gradient descent training on a single GPU.",
"title": ""
},
{
"docid": "7b681d1f200c0281beb161b71e6a3604",
"text": "Data quality remains a persistent problem in practice and a challenge for research. In this study we focus on the four dimensions of data quality noted as the most important to information consumers, namely accuracy, completeness, consistency, and timeliness. These dimensions are of particular concern for operational systems, and most importantly for data warehouses, which are often used as the primary data source for analyses such as classification, a general type of data mining. However, the definitions and conceptual models of these dimensions have not been collectively considered with respect to data mining in general or classification in particular. Nor have they been considered for problem complexity. Conversely, these four dimensions of data quality have only been indirectly addressed by data mining research. Using definitions and constructs of data quality dimensions, our research evaluates the effects of both data quality and problem complexity on generated data and tests the results in a real-world case. Six different classification outcomes selected from the spectrum of classification algorithms show that data quality and problem complexity have significant main and interaction effects. From the findings of significant effects, the economics of higher data quality are evaluated for a frequent application of classification and illustrated by the real-world case.",
"title": ""
},
{
"docid": "9a6ce56536585e54d3e15613b2fa1197",
"text": "This paper discusses the Urdu script characteristics, Urdu Nastaleeq and a simple but a novel and robust technique to recognize the printed Urdu script without a lexicon. Urdu being a family of Arabic script is cursive and complex script in its nature, the main complexity of Urdu compound/connected text is not its connections but the forms/shapes the characters change when it is placed at initial, middle or at the end of a word. The characters recognition technique presented here is using the inherited complexity of Urdu script to solve the problem. A word is scanned and analyzed for the level of its complexity, the point where the level of complexity changes is marked for a character, segmented and feeded to Neural Networks. A prototype of the system has been tested on Urdu text and currently achieves 93.4% accuracy on the average. Keywords— Cursive Script, OCR, Urdu.",
"title": ""
},
{
"docid": "02eccb2c0aeae243bf2023b25850890f",
"text": "In order to meet performance goals, it is widely agreed that vehicular ad hoc networks (VANETs) must rely heavily on node-to-node communication, thus allowing for malicious data traffic. At the same time, the easy access to information afforded by VANETs potentially enables the difficult security goal of data validation. We propose a general approach to evaluating the validity of VANET data. In our approach a node searches for possible explanations for the data it has collected based on the fact that malicious nodes may be present. Explanations that are consistent with the node's model of the VANET are scored and the node accepts the data as dictated by the highest scoring explanations. Our techniques for generating and scoring explanations rely on two assumptions: 1) nodes can tell \"at least some\" other nodes apart from one another and 2) a parsimony argument accurately reflects adversarial behavior in a VANET. We justify both assumptions and demonstrate our approach on specific VANETs.",
"title": ""
},
{
"docid": "c166ae2b9085cc4769438b1ca8ac8ee0",
"text": "Texts in web pages, images and videos contain important clues for information indexing and retrieval. Most existing text extraction methods depend on the language type and text appearance. In this paper, a novel and universal method of image text extraction is proposed. A coarse-to-fine text location method is implemented. Firstly, a multi-scale approach is adopted to locate texts with different font sizes. Secondly, projection profiles are used in location refinement step. Color-based k-means clustering is adopted in text segmentation. Compared to grayscale image which is used in most existing methods, color image is more suitable for segmentation based on clustering. It treats corner-points, edge-points and other points equally so that it solves the problem of handling multilingual text. It is demonstrated in experimental results that best performance is obtained when k is 3. Comparative experimental results on a large number of images show that our method is accurate and robust in various conditions.",
"title": ""
},
{
"docid": "77437d225dcc535fdbe5a7e66e15f240",
"text": "We are interested in automatic scene understanding from geometric cues. To this end, we aim to bring semantic segmentation in the loop of real-time reconstruction. Our semantic segmentation is built on a deep autoencoder stack trained exclusively on synthetic depth data generated from our novel 3D scene library, SynthCam3D. Importantly, our network is able to segment real world scenes without any noise modelling. We present encouraging preliminary results.",
"title": ""
},
{
"docid": "eb8fd891a197e5a028f1ca5eaf3988a3",
"text": "Information-centric networking (ICN) replaces the widely used host-centric networking paradigm in communication networks (e.g., Internet and mobile ad hoc networks) with an information-centric paradigm, which prioritizes the delivery of named content, oblivious of the contents’ origin. Content and client security, provenance, and identity privacy are intrinsic by design in the ICN paradigm as opposed to the current host centric paradigm where they have been instrumented as an after-thought. However, given its nascency, the ICN paradigm has several open security and privacy concerns. In this paper, we survey the existing literature in security and privacy in ICN and present open questions. More specifically, we explore three broad areas: 1) security threats; 2) privacy risks; and 3) access control enforcement mechanisms. We present the underlying principle of the existing works, discuss the drawbacks of the proposed approaches, and explore potential future research directions. In security, we review attack scenarios, such as denial of service, cache pollution, and content poisoning. In privacy, we discuss user privacy and anonymity, name and signature privacy, and content privacy. ICN’s feature of ubiquitous caching introduces a major challenge for access control enforcement that requires special attention. We review existing access control mechanisms including encryption-based, attribute-based, session-based, and proxy re-encryption-based access control schemes. We conclude the survey with lessons learned and scope for future work.",
"title": ""
},
{
"docid": "aed264522ed7ee1d3559fe4863760986",
"text": "A wireless network consisting of a large number of small sensors with low-power transceivers can be an effective tool for gathering data in a variety of environments. The data collected by each sensor is communicated through the network to a single processing center that uses all reported data to determine characteristics of the environment or detect an event. The communication or message passing process must be designed to conserve the limited energy resources of the sensors. Clustering sensors into groups, so that sensors communicate information only to clusterheads and then the clusterheads communicate the aggregated information to the processing center, may save energy. In this paper, we propose a distributed, randomized clustering algorithm to organize the sensors in a wireless sensor network into clusters. We then extend this algorithm to generate a hierarchy of clusterheads and observe that the energy savings increase with the number of levels in the hierarchy. Results in stochastic geometry are used to derive solutions for the values of parameters of our algorithm that minimize the total energy spent in the network when all sensors report data through the clusterheads to the processing center. KeywordsSensor Networks; Clustering Methods; Voronoi Tessellations; Algorithms.",
"title": ""
},
{
"docid": "d269ebe2bc6ab4dcaaac3f603037b846",
"text": "The contribution of power production by photovoltaic (PV) systems to the electricity supply is constantly increasing. An efficient use of the fluctuating solar power production will highly benefit from forecast information on the expected power production. This forecast information is necessary for the management of the electricity grids and for solar energy trading. This paper presents an approach to predict regional PV power output based on forecasts up to three days ahead provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). Focus of the paper is the description and evaluation of the approach of irradiance forecasting, which is the basis for PV power prediction. One day-ahead irradiance forecasts for single stations in Germany show a rRMSE of 36%. For regional forecasts, forecast accuracy is increasing in dependency on the size of the region. For the complete area of Germany, the rRMSE amounts to 13%. Besides the forecast accuracy, also the specification of the forecast uncertainty is an important issue for an effective application. We present and evaluate an approach to derive weather specific prediction intervals for irradiance forecasts. The accuracy of PV power prediction is investigated in a case study.",
"title": ""
},
{
"docid": "101fbbe7760c3961f11da7f1e080e5f7",
"text": "Probiotic ingestion can be recommended as a preventative approach to maintaining the balance of the intestinal microflora and thereby enhance 'well-being'. Research into the use of probiotic intervention in specific illnesses and disorders has identified certain patient populations that may benefit from the approach. Undoubtedly, probiotics will vary in their efficacy and it may not be the case that the same results occur with all species. Those that prove most efficient will likely be strains that are robust enough to survive the harsh physico-chemical conditions present in the gastrointestinal tract. This includes gastric acid, bile secretions and competition with the resident microflora. A survey of the literature indicates positive results in over fifty human trials, with prevention/treatment of infections the most frequently reported output. In theory, increased levels of probiotics may induce a 'barrier' influence against common pathogens. Mechanisms of effect are likely to include the excretion of acids (lactate, acetate), competition for nutrients and gut receptor sites, immunomodulation and the formation of specific antimicrobial agents. As such, persons susceptible to diarrhoeal infections may benefit greatly from probiotic intake. On a more chronic basis, it has been suggested that some probiotics can help maintain remission in the inflammatory conditions, ulcerative colitis and pouchitis. They have also been suggested to repress enzymes responsible for genotoxin formation. Moreover, studies have suggested that probiotics are as effective as anti-spasmodic drugs in the alleviation of irritable bowel syndrome. The approach of modulating the gut flora for improved health has much relevance for the management of those with acute and chronic gut disorders. Other target groups could include those susceptible to nosocomial infections, as well as the elderly, who have an altered microflora, with a decreased number of beneficial microbial species. For the future, it is imperative that mechanistic interactions involved in probiotic supplementation be identified. Moreover, the survival issues associated with their establishment in the competitive gut ecosystem should be addressed. Here, the use of prebiotics in association with useful probiotics may be a worthwhile approach. A prebiotic is a dietary carbohydrate selectively metabolised by probiotics. Combinations of probiotics and prebiotics are known as synbiotics.",
"title": ""
},
{
"docid": "a2082f1b4154cd11e94eff18a016e91e",
"text": "1 During the summer of 2005, I discovered that there was not a copy of my dissertation available from the library at McGill University. I was, however, able to obtain a copy of it on microfilm from another university that had initially obtained it on interlibrary loan. I am most grateful to Vicki Galbraith who typed this version from that copy, which except for some minor variations due to differences in type size and margins (plus this footnote, of course) is identical to that on the microfilm. ACKNOWLEDGEMENTS 1 The writer is grateful to Dr. J. T. McIlhone, Associate General Director in Charge of English Classes of the Montreal Catholic School Board, for his kind cooperation in making subjects available, and to the Principals and French teachers of each high school for their assistance and cooperation during the testing programs. advice on the statistical analysis. In addition, the writer would like to express his appreciation to Mr. K. Tunstall for his assistance in the difficult task of interviewing the parents of each student. Finally, the writer would like to express his gratitude to Janet W. Gardner for her invaluable assistance in all phases of the research program.",
"title": ""
},
{
"docid": "1406e39d95505da3d7ab2b5c74c2e068",
"text": "Context: During requirements engineering, prioritization is performed to grade or rank requirements in their order of importance and subsequent implementation releases. It is a major step taken in making crucial decisions so as to increase the economic value of a system. Objective: The purpose of this study is to identify and analyze existing prioritization techniques in the context of the formulated research questions. Method: Search terms with relevant keywords were used to identify primary studies that relate requirements prioritization classified under journal articles, conference papers, workshops, symposiums, book chapters and IEEE bulletins. Results: 73 Primary studies were selected from the search processes. Out of these studies; 13 were journal articles, 35 were conference papers and 8 were workshop papers. Furthermore, contributions from symposiums as well as IEEE bulletins were 2 each while the total number of book chapters amounted to 13. Conclusion: Prioritization has been significantly discussed in the requirements engineering domain. However , it was generally discovered that, existing prioritization techniques suffer from a number of limitations which includes: lack of scalability, methods of dealing with rank updates during requirements evolution, coordination among stakeholders and requirements dependency issues. Also, the applicability of existing techniques in complex and real setting has not been reported yet.",
"title": ""
},
{
"docid": "0d93bf1b3b891a625daa987652ca1964",
"text": "In this paper, we show that a continuous spectrum of randomis ation exists, in which most existing tree randomisations are only operating around the tw o ends of the spectrum. That leaves a huge part of the spectrum largely unexplored. We propose a ba se le rner VR-Tree which generates trees with variable-randomness. VR-Trees are able to span f rom the conventional deterministic trees to the complete-random trees using a probabilistic pa rameter. Using VR-Trees as the base models, we explore the entire spectrum of randomised ensemb les, together with Bagging and Random Subspace. We discover that the two halves of the spectrum have their distinct characteristics; and the understanding of which allows us to propose a new appr o ch in building better decision tree ensembles. We name this approach Coalescence, which co ales es a number of points in the random-half of the spectrum. Coalescence acts as a committe e of “ xperts” to cater for unforeseeable conditions presented in training data. Coalescence is found to perform better than any single operating point in the spectrum, without the need to tune to a specific level of randomness. In our empirical study, Coalescence ranks top among the benchm arking ensemble methods including Random Forests, Random Subspace and C5 Boosting; and only Co alescence is significantly better than Bagging and Max-Diverse Ensemble among all the methods in the comparison. Although Coalescence is not significantly better than Random Forests , we have identified conditions under which one will perform better than the other.",
"title": ""
},
{
"docid": "a972fb96613715b1d17ac69fdd86c115",
"text": "Saliency detection has been widely studied to predict human fixations, with various applications in computer vision and image processing. For saliency detection, we argue in this paper that the state-of-the-art High Efficiency Video Coding (HEVC) standard can be used to generate the useful features in compressed domain. Therefore, this paper proposes to learn the video saliency model, with regard to HEVC features. First, we establish an eye tracking database for video saliency detection, which can be downloaded from https://github.com/remega/video_database. Through the statistical analysis on our eye tracking database, we find out that human fixations tend to fall into the regions with large-valued HEVC features on splitting depth, bit allocation, and motion vector (MV). In addition, three observations are obtained with the further analysis on our eye tracking database. Accordingly, several features in HEVC domain are proposed on the basis of splitting depth, bit allocation, and MV. Next, a kind of support vector machine is learned to integrate those HEVC features together, for video saliency detection. Since almost all video data are stored in the compressed form, our method is able to avoid both the computational cost on decoding and the storage cost on raw data. More importantly, experimental results show that the proposed method is superior to other state-of-the-art saliency detection methods, either in compressed or uncompressed domain.",
"title": ""
}
] |
scidocsrr
|
75dc886d0434b1b2de77a40031a20863
|
The Corpus DIMEx100: transcription and evaluation
|
[
{
"docid": "377c8fcfa478ae3055494366f24a2bc3",
"text": "In this paper the phonetic and speech corpus DIMEx100 for Mexican Spanish is presented. We discuss both the linguistic motivation and the computational tools employed for the design, collection and transcription of the corpus. The phonetic transcription methodology is based on recent empirical studies proposing a new basic set of allophones and phonological rules for the dialect of the central part of Mexico. These phonological rules have been implemented in a visualization tool that provides the expected phonetic representation of a text, and also a default temporal alignment between the spoken corpus and its phonetic representation. The tools are also used to compute the properties of the corpus and compare these figures with previous work.",
"title": ""
}
] |
[
{
"docid": "c0707e86f711e62dc68559e227e43bcc",
"text": "How to fairly allocate divisible resources, and why computer scientists should take notice.",
"title": ""
},
{
"docid": "19a1aab60faad5a9376bb220352dc081",
"text": "BACKGROUND\nPatients with type 2 diabetes mellitus (T2DM) struggle with the management of their condition due to difficulty relating lifestyle behaviors with glycemic control. While self-monitoring of blood glucose (SMBG) has proven to be effective for those treated with insulin, it has been shown to be less beneficial for those only treated with oral medications or lifestyle modification. We hypothesized that the effective self-management of non-insulin treated T2DM requires a behavioral intervention that empowers patients with the ability to self-monitor, understand the impact of lifestyle behaviors on glycemic control, and adjust their self-care based on contextualized SMBG data.\n\n\nOBJECTIVE\nThe primary objective of this randomized controlled trial (RCT) is to determine the impact of bant2, an evidence-based, patient-centered, behavioral mobile app intervention, on the self-management of T2DM. Our second postulation is that automated feedback delivered through the mobile app will be as effective, less resource intensive, and more scalable than interventions involving additional health care provider feedback.\n\n\nMETHODS\nThis study is a 12-month, prospective, multicenter RCT in which 150 participants will be randomly assigned to one of two groups: the control group will receive current standard of care, and the intervention group will receive the mobile phone app system in addition to standard of care. The primary outcome measure is change in glycated hemoglobin A1c from baseline to 12 months.\n\n\nRESULTS\nThe first patient was enrolled on July 28, 2015, and we anticipate completing this study by September, 2018.\n\n\nCONCLUSIONS\nThis RCT is one of the first to evaluate an evidence-based mobile app that focuses on facilitating lifestyle behavior change driven by contextualized and structured SMBG. The results of this trial will provide insights regarding the usage of mobile tools and consumer-grade devices for diabetes self-care, the economic model of using incentives to motivate behavior change, and the consumption of test strips when following a rigorously structured approach for SMBG.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT02370719; https://clinicaltrials.gov/ct2/show/NCT02370719 (Archived at http://www.webcitation.org/6jpyjfVRs).",
"title": ""
},
{
"docid": "aa253406afd52c172885d9bd01e6451d",
"text": "Crop yield forecasting during the growing season is useful for farming planning and management practices as well as for planning humanitarian aid in developing countries. Common approaches to yield forecast include the use of expensive manual surveys or accessible remote sensing data. Traditional remote sensing based approaches to predict crop yield consist of classical Machine Learning techniques such as Support Vector Machines and Decision Trees. More recent approaches include using deep neural network models, such as CNN and LSTM. We identify the additional gaps in the literature of existing machine learning methods as lacking of (1) standardized training protocol that specifies the optimal time frame, both in terms of years and months of each year, to be considered in the training set, (2) verified applicability to developing countries under the condition of scarce data, and (3) effective utilization of spatial features in remote sensing images. In this thesis, we first replicate the state-of-the-art approach of You et al. [1], in particular their CNN model for crop yield prediction. To tackle the first identified gap, we then perform control experiments to determine the best temporal training settings for soybean yield prediction. To probe the second gap, we further investigate whether this CNN model could be trained on source locations and then be transfered to new target locations and conclude that it is necessary to use source regions that have a similar or generalizable ecosystem to the target regions. This allows us to assess the transferability of CNN-based regression models to developing countries, where little training data is available. Additionally, we propose a novel 3D CNN model for crop yield prediction task that leverages the spatiotemporal features. We demonstrate that our 3D CNN outperforms all competing machine learning methods, shedding light on promising future directions in utilizing deep learning tools for crop yield prediction.",
"title": ""
},
{
"docid": "040e2a1bb9f8cc3717e4dca33d01b4ab",
"text": "The Commission Internationale d'Eclairage system of colorimetry is a method of measuring colours that has been standardized, and is widely used by industries involved with colour. Knowing the CIE coordinates of a colour allows it to be reproduced easily and exactly in many different media. For this reason graphics installations which utilize colour extensively ought to have the capability of knowing the CIE coordinates of displayed colours, and of displaying colours of given CIE coordinates. Such a capability requires a function which transforms video monitor gun voltages (RGB colour space) into CIE coordinates (XYZ colour space), and vice versa. The function incorporates certain monitor parameters. The purpose of this paper is to demonstrate the form that such a function takes, and to show how the necessary monitor parameters can be measured using little more than a simple light meter. Because space is limited, and because each user is likely to implement the calibration differently, few technical details are given, but principles and methods are discussed in sufficient depth to allow the full use of the system. In addition, several visual checks which can be used for quick verification of the integrity of the calibration are described.\n The paper begins with an overview of the CIE system of colorimetry. It continues with a general discussion of transformations from RGB colour space to XYZ colour space, after which a detailed step-by-step procedure for monitor calibration is presented.",
"title": ""
},
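As a rough illustration of the gun-voltage-to-CIE transform discussed in the colorimetry abstract above, the sketch below applies a gamma model followed by a 3×3 matrix. The matrix values and the gamma exponent are placeholders (sRGB/D65 primaries), not measured monitor parameters, so this is a minimal sketch under assumed values rather than the paper's calibration procedure.

```python
import numpy as np

# Placeholder RGB -> XYZ matrix (sRGB/D65 primaries). A calibrated monitor
# would instead use a matrix derived from its own measured phosphor
# chromaticities and white point, as the calibration procedure describes.
M_RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def gun_voltage_to_linear(v, gamma=2.2):
    """Map normalized gun voltages in [0, 1] to linear light via a gamma model."""
    return np.power(np.clip(v, 0.0, 1.0), gamma)

def rgb_to_xyz(rgb, gamma=2.2):
    """Convert normalized RGB gun voltages to CIE XYZ tristimulus values."""
    linear = gun_voltage_to_linear(np.asarray(rgb, dtype=float), gamma)
    return M_RGB_TO_XYZ @ linear

def xyz_to_rgb(xyz, gamma=2.2):
    """Invert the transform: XYZ back to normalized gun voltages."""
    linear = np.linalg.solve(M_RGB_TO_XYZ, np.asarray(xyz, dtype=float))
    return np.power(np.clip(linear, 0.0, 1.0), 1.0 / gamma)

print(rgb_to_xyz([1.0, 1.0, 1.0]))  # roughly the D65 white point for this matrix
```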
{
"docid": "aead9a7a19551a445584064a669b191a",
"text": "The purpose of this paper is to study the impact of tourism marketing mix and how it affects tourism in Jordan, and to determine which element of the marketing mix has the strongest impact on Jordanian tourism and how it will be used to better satisfy tourists. The paper will focus on foreign tourists coming to Jordan; a field survey will be used by using questionnaires to collect data. Three hundred questionnaires will be collected from actual tourists who visited Jordan, the data will be collected from selected tourism sites like (Petra, Jarash,.... etc.) and classified from one to five stars hotels in Jordan. The questionnaire will be designed in different languages (English, French and Arabic) to meet all tourists from different countries. The study established that from all the marketing mix elements, the researcher studied, product & promotion had the strongest effect on foreign tourist's satisfaction, where price and distribution were also effective significant factors. The research recommends suitable marketing strategies for all elements especially product & promotion.",
"title": ""
},
{
"docid": "920f0588d6b4fb11c4b942e4785524a4",
"text": "This paper discusses the application of Nyquist plot and two statistical indicators to analyze the frequency response of power transformers. Five cases are presented based on both simulations and measurements. These include two types of winding deformation, namely tilting and bending of conductors. Initially, damage is simulated for five degrees of severity and analyzed using two statistical indicators. The indicators are compared for their performance in terms of evaluating the severity of damage. New findings from this comparison are presented. Additionally, the benchmark limit for one of the indicators is also assessed on all cases. Finally, a methodology is proposed to estimate the severity of damage from two frequency responses using the Nyquist plot. This method is tested on three different windings.",
"title": ""
},
{
"docid": "17bf5c037090b90c01b619d821e03839",
"text": "Telling a great story often involves a deliberate alteration of emotions. In this paper, we objectively measure and analyze the narrative trajectories of stories in public speaking and their impact on subjective ratings. We conduct the analysis using the transcripts of over 2000 TED talks and estimate potential audience response using over 5 million spontaneous annotations from the viewers. We use IBM Watson Tone Analyzer to extract sentence-wise emotion, language, and social scores. Our study indicates that it is possible to predict (with AUC as high as 0.88) the subjective ratings of the audience by analyzing the narrative trajectories. Additionally, we find that some trajectories (for example, a flat trajectory of joy) correlate well with some specific ratings (e.g. \"Longwinded') assigned by the viewers. Such an association could be useful in forecasting audience responses using objective analysis.",
"title": ""
},
{
"docid": "7d00770a64f25b728f149939fd2c1e7c",
"text": "Replicated databases that use quorum-consensus algorithms to perform majority voting are prone to deadlocks. Due to the P-out-of-Q nature of quorum requests, deadlocks that arise are generalized deadlocks and are hard to detect. We present an efficient distributed algorithm to detect generalized deadlocks in replicated databases. The algorithm performs reduction of a distributed waitfor-graph (WFG) to determine the existence of a deadlock. if sufficient information to decide the reducibility of a node is not available at that node, the algorithm attempts reduction later in a lazy manner. We prove the correctness of the algorithm. The algorithm has a message complexity of 2n messages and a worst-case time complexity of 2d + 2 hops, where c is the number of edges and d is the diameter of the WFG. The algorithm is shown to perform significantly better in both time and message complexity than the best known existing algorithms. We conjecture that this is an optimal algorithm, in time and message complexity, to detect generalized deadlocks if no transaction has complete knowledge of the topology of the WFG or the system and the deadlock detection is to be carried out in a distributed manner.",
"title": ""
},
{
"docid": "0aed8ff4a76df67ea1f31542f33bf856",
"text": "This paper presents a survey of a selection of currently available simulation software for robots and unmanned vehicles. In particular, the simulators selected are reviewed for their suitability for the simulation of Autonomous Underwater Vehicles (AUVs), as well as their suitability for the simulation of multi-vehicle operations. The criteria for selection are based on the following features: sufficient physical fidelity to allow modelling of manipulators and end effectors; a programmatic interface, via scripting or middleware; modelling of optical and/or acoustic sensors; adequate documentation; previous use in academic research. A subset of the selected simulators are reviewed in greater detail; these are UWSim, MORSE, and Gazebo. This subset of simulators allow virtual sensors to be simulated, such as GPS, sonar, and multibeam sonar making them suitable for the design and simulation of navigation and mission planning algorithms. We conclude that simulation for underwater vehicles remains a niche problem, but with some additional effort researchers wishing to simulate such vehicles may do so, basing their work on existing software.",
"title": ""
},
{
"docid": "0ea5620ee29e0084b2b2e146113fa614",
"text": "The paper aims to describe the cyclical phases of the economy by using multivariate Markov switching models. The class of Markov switching models can be extended in two main directions in a multivariate framework. In the first approach, the switching dynamics are introduced by way of one common latent factor (Diebold and Rudebusch, 1996). In the second approach, introduced by Krolzig (1997), a VAR model with parameters depending on one common Markov chain is considered (MS VAR). We will extend the MS VAR approach allowing for the presence of specific Markov chain in each equation of the VAR (Multiple Markov Switching VAR models, MMS VAR). Dynamic factor models with regime switches, MS VAR and MMS VAR models allow for a multi-country or a multi-sector simultaneous analysis in the search of common phases which are represented by the states of the switching latent factor. Moreover, in the MMS VAR approach we explore the introduction of correlated Markov chains which allow us to evaluate the relationships among phases in different economies or sectors and introduce causality relationships, which can allow more parsimonious representations. We apply the MMS model in order to study the relationship between cyclical phases in the U.S. and Euro zone industrial production. Moreover, we construct a MMS model in order to explore the cyclical relationship between the Euro zone industrial production and the industrial component of the European Sentiment Index (ESI). Subject Area N. 4: “Dating, detecting and forecasting turning points”",
"title": ""
},
{
"docid": "7c3457a5ca761b501054e76965b41327",
"text": "Background learning is a pre-processing of motion detection which is a basis step of video analysis. For the static background, many previous works have already achieved good performance. However, the results on learning dynamic background are still much to be improved. To address this challenge, in this paper, a novel and practical method is proposed based on deep auto-encoder networks. Firstly, dynamic background images are extracted through a deep auto-encoder network (called Background Extraction Network) from video frames containing motion objects. Then, a dynamic background model is learned by another deep auto-encoder network (called Background Learning Network) using the extracted background images as the input. To be more flexible, our background model can be updated on-line to absorb more training samples. Our main contributions are 1) a cascade of two deep auto-encoder networks which can deal with the separation of dynamic background and foregrounds very efficiently; 2) a method of online learning is adopted to accelerate the training of Background Extraction Network. Compared with previous algorithms, our approach obtains the best performance over six benchmark data sets. Especially, the experiments show that our algorithm can handle large variation background very well.",
"title": ""
},
{
"docid": "ce8d70f73b3bf312dc0a88aa646eea55",
"text": "1.1 Introduction Intelligent agents are a new paradigm for developing software applications. More than this, agent-based computing has been hailed as 'the next significant breakthrough in software development' (Sargent, 1992), and 'the new revolution in software' (Ovum, 1994). Currently, agents are the focus of intense interest on the part of many sub-fields of computer science and artificial intelligence. Agents are being used in an increasingly wide variety of applications, ranging from comparatively small systems such as email filters to large, open, complex, mission critical systems such as air traffic control. At first sight, it may appear that such extremely different types of system can have little in common. And yet this is not the case: in both, the key abstraction used is that of an agent. Our aim in this article is to help the reader to understand why agent technology is seen as a fundamentally important new tool for building such a wide array of systems. More precisely, our aims are five-fold: • to introduce the reader to the concept of an agent and agent-based systems, • to help the reader to recognize the domain characteristics that indicate the appropriateness of an agent-based solution, • to introduce the main application areas in which agent technology has been successfully deployed to date, • to identify the main obstacles that lie in the way of the agent system developer, and finally • to provide a guide to the remainder of this book. We begin, in this section, by introducing some basic concepts (such as, perhaps most importantly, the notion of an agent). In Section 1.2, we give some general guidelines on the types of domain for which agent technology is appropriate. In Section 1.3, we survey the key application domains for intelligent agents. In Section 1.4, we discuss some issues in agent system development, and finally, in Section 1.5, we outline the structure of this book. Before we can discuss the development of agent-based systems in detail, we have to describe what we mean by such terms as 'agent' and 'agent-based system'. Unfortunately, we immediately run into difficulties, as some key concepts in agent-based computing lack universally accepted definitions. In particular, there is no real agreement even on the core question of exactly what an agent is (see Franklin and Graesser (1996) for a discussion). However, we believe that most researchers",
"title": ""
},
{
"docid": "126aa91446d5b346f448d61bd8908401",
"text": "CONTEXT\nMany coaches, parents, and children believe that the best way to develop elite athletes is for them to participate in only 1 sport from an early age and to play it year-round. However, emerging evidence to the contrary indicates that efforts to specialize in 1 sport may reduce opportunities for all children to participate in a diverse year-round sports season and can lead to lost development of lifetime sports skills. Early sports specialization may also reduce motor skill development and ongoing participation in games and sports as a lifestyle choice. The purpose of this review is to employ the current literature to provide evidence-based alternative strategies that may help to optimize opportunities for all aspiring young athletes to maximize their health, fitness, and sports performance.\n\n\nEVIDENCE ACQUISITION\nNonsystematic review with critical appraisal of existing literature.\n\n\nSTUDY DESIGN\nClinical review.\n\n\nLEVEL OF EVIDENCE\nLevel 4.\n\n\nCONCLUSION\nBased on the current evidence, parents and educators should help provide opportunities for free unstructured play to improve motor skill development and youth should be encouraged to participate in a variety of sports during their growing years to influence the development of diverse motor skills. For those children who do choose to specialize in a single sport, periods of intense training and specialized sport activities should be closely monitored for indicators of burnout, overuse injury, or potential decrements in performance due to overtraining. Last, the evidence indicates that all youth should be involved in periodized strength and conditioning (eg, integrative neuromuscular training) to help them prepare for the demands of competitive sport participation, and youth who specialize in a single sport should plan periods of isolated and focused integrative neuromuscular training to enhance diverse motor skill development and reduce injury risk factors.\n\n\nSTRENGTH OF RECOMMENDATION TAXONOMY SORT\nB.",
"title": ""
},
{
"docid": "a86bc0970dba249e1e53f9edbad3de43",
"text": "Periodic inspection of a hanger rope is needed for the effective maintenance of suspension bridge. However, it is dangerous for human workers to access the hanger rope and not easy to check the exact state of the hanger rope. In this work we have developed a wheel-based robot that can approach the hanger rope instead of the human worker and carry the inspection device which is able to examine the inside status of the hanger rope. Meanwhile, a wheel-based cable climbing robot may be badly affected by the vibration that is generated while the robot moves on the bumpy surface of the hanger rope. The caterpillar is able to safely drive with the wide contact face on the rough terrain. Accordingly, we developed the caterpillar that can be combined with the developed cable climbing robot. In this paper, the caterpillar is introduced and its performance is compared with the wheel-based cable climbing robot.",
"title": ""
},
{
"docid": "2d609cfd11e5eec7b62f732c276c0838",
"text": "Deep neural networks show great potential as solutions to many sensing application problems, but their excessive resource demand slows down execution time, pausing a serious impediment to deployment on low-end devices. To address this challenge, recent literature focused on compressing neural network size to improve performance. We show that changing neural network size does not proportionally affect performance attributes of interest, such as execution time. Rather, extreme run-time nonlinearities exist over the network configuration space. Hence, we propose a novel framework, called FastDeepIoT, that uncovers the non-linear relation between neural network structure and execution time, then exploits that understanding to find network configurations that significantly improve the trade-off between execution time and accuracy on mobile and embedded devices. FastDeepIoT makes two key contributions. First, FastDeepIoT automatically learns an accurate and highly interpretable execution time model for deep neural networks on the target device. This is done without prior knowledge of either the hardware specifications or the detailed implementation of the used deep learning library. Second, FastDeepIoT informs a compression algorithm how to minimize execution time on the profiled device without impacting accuracy. We evaluate FastDeepIoT using three different sensing-related tasks on two mobile devices: Nexus 5 and Galaxy Nexus. FastDeepIoT further reduces the neural network execution time by 48% to 78% and energy consumption by 37% to 69% compared with the state-of-the-art compression algorithms.",
"title": ""
},
{
"docid": "6b83827500e4ea22c9fed3288d0506a7",
"text": "This study develops a high-performance stand-alone photovoltaic (PV) generation system. To make the PV generation system more flexible and expandable, the backstage power circuit is composed of a high step-up converter and a pulsewidth-modulation (PWM) inverter. In the dc-dc power conversion, the high step-up converter is introduced to improve the conversion efficiency in conventional boost converters to allow the parallel operation of low-voltage PV arrays, and to decouple and simplify the control design of the PWM inverter. Moreover, an adaptive total sliding-mode control system is designed for the voltage control of the PWM inverter to maintain a sinusoidal output voltage with lower total harmonic distortion and less variation under various output loads. In addition, an active sun tracking scheme without any light sensors is investigated to make the PV modules face the sun directly for capturing the maximum irradiation and promoting system efficiency. Experimental results are given to verify the validity and reliability of the high step-up converter, the PWM inverter control, and the active sun tracker for the high-performance stand-alone PV generation system.",
"title": ""
},
{
"docid": "e006be5c04dfbb672eaac6cd41ead75c",
"text": "Current regulators for ac inverters are commonly categorized as hysteresis, linear PI, or deadbeat predictive regulators, with a further sub-classification into stationary ABC frame and synchronous – frame implementations. Synchronous frame regulators are generally accepted to have a better performance than stationary frame regulators, as they operate on dc quantities and hence can eliminate steady-state errors. This paper establishes a theoretical connection between these two classes of regulators and proposes a new type of stationary frame regulator, the P+Resonant regulator, which achieves the same transient and steady-state performance as a synchronous frame PI regulator. The new regulator is applicable to both single-phase and three phase inverters.",
"title": ""
},
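For reference, a commonly cited form of the stationary-frame proportional+resonant regulator mentioned in the abstract above is sketched below. The exact coefficients (and whether a damping/bandwidth term is included) vary across implementations, so treat these expressions as an assumed textbook form rather than the paper's own derivation.

```latex
% Ideal P+Resonant transfer function: K_p and K_i are the proportional and
% resonant gains, and \omega_0 is the fundamental (resonant) frequency.
G_{PR}(s) = K_p + \frac{2 K_i s}{s^2 + \omega_0^2}

% A damped variant often used in practice adds a bandwidth term \omega_c,
% which limits the infinite gain at \omega_0 and eases implementation:
G_{PR}(s) = K_p + \frac{2 K_i \omega_c s}{s^2 + 2\omega_c s + \omega_0^2}
```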
{
"docid": "af12d1794a65cb3818f1561384e069b2",
"text": " Multi-Criteria Decision Making (MCDM) methods have evolved to accommodate various types of applications. Dozens of methods have been developed, with even small variations to existing methods causing the creation of new branches of research. This paper performs a literature review of common Multi-Criteria Decision Making methods, examines the advantages and disadvantages of the identified methods, and explains how their common applications relate to their relative strengths and weaknesses. The analysis of MCDM methods performed in this paper provides a clear guide for how MCDM methods should be used in particular situations.",
"title": ""
},
{
"docid": "fe52b7bff0974115a0e326813604997b",
"text": "Deep learning is a model of machine learning loosely based on our brain. Artificial neural network has been around since the 1950s, but recent advances in hardware like graphical processing units (GPU), software like cuDNN, TensorFlow, Torch, Caffe, Theano, Deeplearning4j, etc. and new training methods have made training artificial neural networks fast and easy. In this paper, we are comparing some of the deep learning frameworks on the basis of parameters like modeling capability, interfaces available, platforms supported, parallelizing techniques supported, availability of pre-trained models, community support and documentation quality.",
"title": ""
},
{
"docid": "dd2cb96ed215b5ee050ca4c16d61e1bc",
"text": "The goal of this chapter is to give fundamental knowledge on solving multi-objective optimization problems. The focus is on the intelligent metaheuristic approaches (evolutionary algorithms or swarm-based techniques). The focus is on techniques for efficient generation of the Pareto frontier. A general formulation of MO optimization is given in this chapter, the Pareto optimality concepts introduced, and solution approaches with examples of MO problems in the power systems field are given",
"title": ""
}
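To make the Pareto-optimality concept in the chapter abstract above concrete, here is a minimal sketch of extracting the non-dominated set from a list of objective vectors (minimization assumed). It is a brute-force illustration only, not one of the metaheuristic generation techniques the chapter surveys, and the sample objective values are made up.

```python
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if solution a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points: List[Sequence[float]]) -> List[Sequence[float]]:
    """Brute-force O(n^2) extraction of the non-dominated set."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

objectives = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(pareto_front(objectives))  # (3.0, 4.0) is dominated by (2.0, 3.0)
```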
] |
scidocsrr
|
fe8fcd0de803e1e871c46dae2508eb8d
|
Experiments with SVM to classify opinions in different domains
|
[
{
"docid": "095dbdc1ac804487235cdd0aeffe8233",
"text": "Sentiment analysis is the task of identifying whether the opinion expressed in a document is positive or negative about a given topic. Unfortunately, many of the potential applications of sentiment analysis are currently infeasible due to the huge number of features found in standard corpora. In this paper we systematically evaluate a range of feature selectors and feature weights with both Naı̈ve Bayes and Support Vector Machine classifiers. This includes the introduction of two new feature selection methods and three new feature weighting methods. Our results show that it is possible to maintain a state-of-the art classification accuracy of 87.15% while using less than 36% of the features.",
"title": ""
},
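Since this query concerns SVM-based opinion classification, a minimal scikit-learn sketch of the kind of pipeline such studies evaluate (feature weighting, feature selection, then a linear SVM) is given below. The toy documents, the chi-squared selector, and the parameter choices are purely illustrative assumptions and are not drawn from either of the cited papers.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Toy corpus; real experiments would use movie/product review corpora.
docs = ["great acting and a moving story", "boring plot and terrible pacing",
        "loved this product, works perfectly", "awful quality, waste of money"]
labels = [1, 0, 1, 0]  # 1 = positive opinion, 0 = negative opinion

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),   # feature weighting
    ("select", SelectKBest(chi2, k=20)),              # feature selection
    ("svm", LinearSVC()),                             # linear SVM classifier
])
pipeline.fit(docs, labels)
print(pipeline.predict(["the story was great", "terrible waste"]))
```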
{
"docid": "8a7ea746acbfd004d03d4918953d283a",
"text": "Sentiment analysis is an important current research area. This paper combines rule-based classification, supervised learning andmachine learning into a new combinedmethod. Thismethod is tested onmovie reviews, product reviews and MySpace comments. The results show that a hybrid classification can improve the classification effectiveness in terms of microand macro-averaged F1. F1 is a measure that takes both the precision and recall of a classifier’s effectiveness into account. In addition, we propose a semi-automatic, complementary approach in which each classifier can contribute to other classifiers to achieve a good level of effectiveness.",
"title": ""
}
] |
[
{
"docid": "e5d107b5f81d9cd1b6d5ac58339cc427",
"text": "While one of the first steps in many NLP systems is selecting what embeddings to use, we argue that such a step is better left for neural networks to figure out by themselves. To that end, we introduce a novel, straightforward yet highly effective method for combining multiple types of word embeddings in a single model, leading to state-of-the-art performance within the same model class on a variety of tasks. We subsequently show how the technique can be used to shed new insight into the usage of word embeddings in NLP systems.",
"title": ""
},
{
"docid": "258655a00ea8acde4e2bde42376c1ead",
"text": "A main puzzle of deep networks revolves around the absence of overfitting despite large overparametrization and despite the large capacity demonstrated by zero training error on randomly labeled data. In this note, we show that the dynamics associated to gradient descent minimization of nonlinear networks is topologically equivalent, near the asymptotically stable minima of the empirical error, to linear gradient system in a quadratic potential with a degenerate (for square loss) or almost degenerate (for logistic or crossentropy loss) Hessian. The proposition depends on the qualitative theory of dynamical systems and is supported by numerical results. Our main propositions extend to deep nonlinear networks two properties of gradient descent for linear networks, that have been recently established (1) to be key to their generalization properties: 1. Gradient descent enforces a form of implicit regularization controlled by the number of iterations, and asymptotically converges to the minimum norm solution for appropriate initial conditions of gradient descent. This implies that there is usually an optimum early stopping that avoids overfitting of the loss. This property, valid for the square loss and many other loss functions, is relevant especially for regression. 2. For classification, the asymptotic convergence to the minimum norm solution implies convergence to the maximum margin solution which guarantees good classification error for “low noise” datasets. This property holds for loss functions such as the logistic and cross-entropy loss independently of the initial conditions. The robustness to overparametrization has suggestive implications for the robustness of the architecture of deep convolutional networks with respect to the curse of dimensionality. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF 1231216. 1 ar X iv :1 80 1. 00 17 3v 2 [ cs .L G ] 1 6 Ja n 20 18",
"title": ""
},
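The specific claim above that gradient descent on an overparametrized linear least-squares problem converges to the minimum-norm interpolating solution (for zero initialization) can be checked numerically in a few lines. The sketch below illustrates only that single property with arbitrary random data; it is not a reproduction of the note's full analysis, and the step size and iteration count are assumptions chosen to keep the iteration stable.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                      # more parameters than samples (overparametrized)
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.zeros(d)                     # zero init keeps iterates in the row space of X
lr = 1e-3
for _ in range(5000):               # gradient descent on 0.5 * ||Xw - y||^2
    w -= lr * X.T @ (X @ w - y)

w_min_norm = np.linalg.pinv(X) @ y  # the minimum-norm interpolating solution
print(np.linalg.norm(X @ w - y))        # ~0: training error is (near) zero
print(np.linalg.norm(w - w_min_norm))   # ~0: GD converged to the min-norm solution
```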
{
"docid": "7c171e744df03df658c02e899e197bd4",
"text": "In rodent models, acoustic exposure too modest to elevate hearing thresholds can nonetheless cause auditory nerve fiber deafferentation, interfering with the coding of supra-threshold sound. Low-spontaneous rate nerve fibers, important for encoding acoustic information at supra-threshold levels and in noise, are more susceptible to degeneration than high-spontaneous rate fibers. The change in auditory brainstem response (ABR) wave-V latency with noise level has been shown to be associated with auditory nerve deafferentation. Here, we measured ABR in a forward masking paradigm and evaluated wave-V latency changes with increasing masker-to-probe intervals. In the same listeners, behavioral forward masking detection thresholds were measured. We hypothesized that 1) auditory nerve fiber deafferentation increases forward masking thresholds and increases wave-V latency and 2) a preferential loss of low-spontaneous rate fibers results in a faster recovery of wave-V latency as the slow contribution of these fibers is reduced. Results showed that in young audiometrically normal listeners, a larger change in wave-V latency with increasing masker-to-probe interval was related to a greater effect of a preceding masker behaviorally. Further, the amount of wave-V latency change with masker-to-probe interval was positively correlated with the rate of change in forward masking detection thresholds. Although we cannot rule out central contributions, these findings are consistent with the hypothesis that auditory nerve fiber deafferentation occurs in humans and may predict how well individuals can hear in noisy environments.",
"title": ""
},
{
"docid": "c12d27988e70e9b3e6987ca2f0ca8bca",
"text": "In this tutorial, we introduce the basic theory behind Stega nography and Steganalysis, and present some recent algorithms and devel opm nts of these fields. We show how the existing techniques used nowadays are relate d to Image Processing and Computer Vision, point out several trendy applicati ons of Steganography and Steganalysis, and list a few great research opportunities j ust waiting to be addressed.",
"title": ""
},
{
"docid": "305f877227516eded75819bdf48ab26d",
"text": "Deep generative models have been successfully applied to many applications. However, existing works experience limitations when generating large images (the literature usually generates small images, e.g. 32× 32 or 128× 128). In this paper, we propose a novel scheme, called deep tensor adversarial generative nets (TGAN), that generates large high-quality images by exploring tensor structures. Essentially, the adversarial process of TGAN takes place in a tensor space. First, we impose tensor structures for concise image representation, which is superior in capturing the pixel proximity information and the spatial patterns of elementary objects in images, over the vectorization preprocess in existing works. Secondly, we propose TGAN that integrates deep convolutional generative adversarial networks and tensor super-resolution in a cascading manner, to generate high-quality images from random distributions. More specifically, we design a tensor super-resolution process that consists of tensor dictionary learning and tensor coefficients learning. Finally, on three datasets, the proposed TGAN generates images with more realistic textures, compared with state-of-the-art adversarial autoencoders. The size of the generated images is increased by over 8.5 times, namely 374× 374 in PASCAL2.",
"title": ""
},
{
"docid": "353500d18d56c0bf6dc13627b0517f41",
"text": "In order to accelerate the learning process in high dimensional reinforcement learning problems, TD methods such as Q-learning and Sarsa are usually combined with eligibility traces. The recently introduced DQN (Deep Q-Network) algorithm, which is a combination of Q-learning with a deep neural network, has achieved good performance on several games in the Atari 2600 domain. However, the DQN training is very slow and requires too many time steps to converge. In this paper, we use the eligibility traces mechanism and propose the deep Q(λ) network algorithm. The proposed method provides faster learning in comparison with the DQN method. Empirical results on a range of games show that the deep Q(λ) network significantly reduces learning time.",
"title": ""
},
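For readers unfamiliar with eligibility traces, the tabular update that the deep Q(λ) network above generalizes can be written in a few lines. This is a generic Watkins-style sketch under assumed notation (Q and E as state-action tables), not the paper's network-based algorithm or its DQN integration.

```python
import numpy as np

def q_lambda_step(Q, E, s, a, r, s_next, a_next,
                  alpha=0.1, gamma=0.99, lam=0.9):
    """One tabular Watkins-style Q(lambda) update.

    Q, E are (n_states, n_actions) arrays: action values and eligibility traces.
    a_next is the action the behavior policy will actually take in s_next.
    """
    a_greedy = int(np.argmax(Q[s_next]))
    delta = r + gamma * Q[s_next, a_greedy] - Q[s, a]   # one-step TD error
    E[s, a] += 1.0                                      # accumulating trace
    Q += alpha * delta * E                              # credit all traced pairs
    if a_next == a_greedy:
        E *= gamma * lam                                # decay traces
    else:
        E[:] = 0.0                                      # cut traces after exploration
    return Q, E
```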
{
"docid": "62f5640954e5b731f82599fb52ea816f",
"text": "This paper presents an energy-balance control strategy for a cascaded single-phase grid-connected H-bridge multilevel inverter linking n independent photovoltaic (PV) arrays to the grid. The control scheme is based on an energy-sampled data model of the PV system and enables the design of a voltage loop linear discrete controller for each array, ensuring the stability of the system for the whole range of PV array operating conditions. The control design is adapted to phase-shifted and level-shifted carrier pulsewidth modulations to share the control action among the cascade-connected bridges in order to concurrently synthesize a multilevel waveform and to keep each of the PV arrays at its maximum power operating point. Experimental results carried out on a seven-level inverter are included to validate the proposed approach.",
"title": ""
},
{
"docid": "0d0fd1c837b5e45b83ee590017716021",
"text": "General intelligence and personality traits from the Five-Factor model were studied as predictors of academic achievement in a large sample of Estonian schoolchildren from elementary to secondary school. A total of 3618 students (1746 boys and 1872 girls) from all over Estonia attending Grades 2, 3, 4, 6, 8, 10, and 12 participated in this study. Intelligence, as measured by the Raven’s Standard Progressive Matrices, was found to be the best predictor of students’ grade point average (GPA) in all grades. Among personality traits (measured by self-reports on the Estonian Big Five Questionnaire for Children in Grades 2 to 4 and by the NEO Five Factor Inventory in Grades 6 to 12), Openness, Agreeableness, and Conscientiousness correlated positively and Neuroticism correlated negatively with GPA in almost every grade. When all measured variables were entered together into a regression model, intelligence was still the strongest predictor of GPA, being followed by Agreeableness in Grades 2 to 4 and Conscientiousness in Grades 6 to 12. Interactions between predictor variables and age accounted for only a small percentage of variance in GPA, suggesting that academic achievement relies basically on the same mechanisms through the school years. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e9cc899155bd5f88ae1a3d5b88de52af",
"text": "This article reviews research evidence showing to what extent the chronic care model can improve the management of chronic conditions (using diabetes as an example) and reduce health care costs. Thirty-two of 39 studies found that interventions based on chronic care model components improved at least 1 process or outcome measure for diabetic patients. Regarding whether chronic care model interventions can reduce costs, 18 of 27 studies concerned with 3 examples of chronic conditions (congestive heart failure, asthma, and diabetes) demonstrated reduced health care costs or lower use of health care services. Even though the chronic care model has the potential to improve care and reduce costs, several obstacles hinder its widespread adoption.",
"title": ""
},
{
"docid": "2d998d0e0966acf04dfe377cde35aafa",
"text": "This paper proposes a generalization of the multi- Bernoulli filter called the labeled multi-Bernoulli filter that outputs target tracks. Moreover, the labeled multi-Bernoulli filter does not exhibit a cardinality bias due to a more accurate update approximation compared to the multi-Bernoulli filter by exploiting the conjugate prior form for labeled Random Finite Sets. The proposed filter can be interpreted as an efficient approximation of the δ-Generalized Labeled Multi-Bernoulli filter. It inherits the advantages of the multi-Bernoulli filter in regards to particle implementation and state estimation. It also inherits advantages of the δ-Generalized Labeled Multi-Bernoulli filter in that it outputs (labeled) target tracks and achieves better performance.",
"title": ""
},
{
"docid": "855a8cfdd9d01cd65fe32d18b9be4fdf",
"text": "Interest in business intelligence and analytics education has begun to attract IS scholars’ attention. In order to discover new research questions, there is a need for conducting a literature review of extant studies on BI&A education. This study identified 44 research papers through using Google Scholar related to BI&A education. This research contributes to the field of BI&A education by (a) categorizing the existing studies on BI&A education into the key five research foci, and (b) identifying the research gaps and providing the guide for future BI&A and IS research.",
"title": ""
},
{
"docid": "d8c40ed2d2b2970412cc8404576d0c80",
"text": "In this paper an adaptive control technique combined with the so-called IDA-PBC (Interconnexion Damping Assignment, Passivity Based Control) controller is proposed for the stabilization of a class of underactuated mechanical systems, namely, the Inertia Wheel Inverted Pendulum (IWIP). It has two degrees of freedom with one actuator. The IDA-PBC stabilizes for all initial conditions (except a set of zeros measure) the upward position of the IWIP. The efficiency of this controller depends on the tuning of several gains. Motivated by this issue we propose to automatically adapt some of these gains in order to regain performance rapidly. The effectiveness of the proposed adaptive scheme is demonstrated through numerical simulations and experimental results.",
"title": ""
},
{
"docid": "1073c1f4013f6c57259502391d75d356",
"text": "A long-standing dream of Artificial Intelligence (AI) has pursued to enrich computer programs with commonsense knowledge enabling machines to reason about our world. This paper offers a new practical insight towards the automation of commonsense reasoning with first-order logic (FOL) ontologies. We propose a new black-box testing methodology of FOL SUMO-based ontologies by exploiting WordNet and its mapping into SUMO. Our proposal includes a method for the (semi-)automatic creation of a very large set of tests and a procedure for its automated evaluation by using automated theorem provers (ATPs). Applying our testing proposal, we are able to successfully evaluate a) the competency of several translations of SUMO into FOL and b) the performance of various automated ATPs. In addition, we are also able to evaluate the resulting set of tests according to different quality criteria.",
"title": ""
},
{
"docid": "1053359e8374c47d4645c5609ffafaee",
"text": "In this paper, we derive a new infinite series representation for the trivariate non-central chi-squared distribution when the underlying correlated Gaussian variables have tridiagonal form of inverse covariance matrix. We make use of the Miller's approach and the Dougall's identity to derive the joint density function. Moreover, the trivariate cumulative distribution function (cdf) and characteristic function (chf) are also derived. Finally, bivariate noncentral chi-squared distribution and some known forms are shown to be special cases of the more general distribution. However, non-central chi-squared distribution for an arbitrary covariance matrix seems intractable with the Miller's approach.",
"title": ""
},
{
"docid": "31e8d60af8a1f9576d28c4c1e0a3db86",
"text": "Management of bulk sensor data is one of the challenging problems in the development of Internet of Things (IoT) applications. High volume of sensor data induces for optimal implementation of appropriate sensor data compression technique to deal with the problem of energy-efficient transmission, storage space optimization for tiny sensor devices, and cost-effective sensor analytics. The compression performance to realize significant gain in processing high volume sensor data cannot be attained by conventional lossy compression methods, which are less likely to exploit the intrinsic unique contextual characteristics of sensor data. In this paper, we propose SensCompr, a dynamic lossy compression method specific for sensor datasets and it is easily realizable with standard compression methods. Senscompr leverages robust statistical and information theoretic techniques and does not require specific physical modeling. It is an information-centric approach that exhaustively analyzes the inherent properties of sensor data for extracting the embedded useful information content and accordingly adapts the parameters of compression scheme to maximize compression gain while optimizing information loss. Senscompr is successfully applied to compress large sets of heterogeneous real sensor datasets like ECG, EEG, smart meter, accelerometer. To the best of our knowledge, for the first time 'sensor information content'-centric dynamic compression technique is proposed and implemented particularly for IoT-applications and this method is independent to sensor data types.",
"title": ""
},
{
"docid": "c69a480600fea74dab84290e6c0e2204",
"text": "Mobile cloud computing is computing of Mobile application through cloud. As we know market of mobile phones is growing rapidly. According to IDC, the premier global market intelligence firm, the worldwide Smartphone market grew 42. 5% year over year in the first quarter of 2012. With the growing demand of Smartphone the demand for fast computation is also growing. Inspite of comparatively more processing power and storage capability of Smartphone's, they still lag behind Personal Computers in meeting processing and storage demands of high end applications like speech recognition, security software, gaming, health services etc. Mobile cloud computing is an answer to intensive processing and storage demand of real-time and high end applications. Being in nascent stage, Mobile Cloud Computing has privacy and security issues which deter the users from adopting this technology. This review paper throws light on privacy and security issues of Mobile Cloud Computing.",
"title": ""
},
{
"docid": "83f1fc22d029b3a424afcda770a5af23",
"text": "Three species of Xerolycosa: Xerolycosa nemoralis (Westring, 1861), Xerolycosa miniata (C.L. Koch, 1834) and Xerolycosa mongolica (Schenkel, 1963), occurring in the Palaearctic Region are surveyed, illustrated and redescribed. Arctosa mongolica Schenkel, 1963 is removed from synonymy with Xerolycosa nemoralis and transferred to Xerolycosa, and the new combination Xerolycosa mongolica (Schenkel, 1963) comb. n. is established. One new synonymy, Xerolycosa undulata Chen, Song et Kim, 1998 syn.n. from Heilongjiang = Xerolycosa mongolica (Schenkel, 1963), is proposed. In addition, one more new combination is established, Trochosa pelengena (Roewer, 1960) comb. n., ex Xerolycosa.",
"title": ""
},
{
"docid": "e9bd226d50c9a6633c32b9162cbd14f4",
"text": "PURPOSE\nTo report clinical features and treatment outcomes of ocular juvenile xanthogranuloma (JXG).\n\n\nDESIGN\nRetrospective case series.\n\n\nPARTICIPANTS\nThere were 32 tumors in 31 eyes of 30 patients with ocular JXG.\n\n\nMETHODS\nReview of medical records.\n\n\nMAIN OUTCOME MEASURES\nTumor control, intraocular pressure (IOP), and visual acuity.\n\n\nRESULTS\nThe mean patient age at presentation was 51 months (median, 15 months; range, 1-443 months). Eye redness (12/30, 40%) and hyphema (4/30, 13%) were the most common presenting symptoms. Cutaneous JXG was concurrently present in 3 patients (3/30, 10%), and spinal JXG was present in 1 patient (1/30, 3%). The ocular tissue affected by JXG included the iris (21/31, 68%), conjunctiva (6/31, 19%), eyelid (2/31, 6%), choroid (2/31, 6%), and orbit (1/31, 3%). Those with iris JXG presented at a median age of 13 months compared with 30 months for those with conjunctival JXG. In the iris JXG group, mean IOP was 19 mmHg (median, 18 mmHg; range, 11-30 mmHg) and hyphema was noted in 8 eyes (8/21, 38%). The iris tumor was nodular (16/21, 76%) or diffuse (5/21, 24%). Fine-needle aspiration biopsy was used in 10 cases and confirmed JXG cytologically in all cases. The iris lesion was treated with topical (18/21, 86%) and/or periocular (4/21, 19%) corticosteroids. The eyelid, conjunctiva, and orbital JXG were treated with excisional biopsy in 5 patients (5/9, 56%), topical corticosteroids in 2 patients (2/9, 22%), and observation in 2 patients (2/9, 22%). Of 28 patients with a mean follow-up of 15 months (median, 6 months; range, 1-68 months), tumor regression was achieved in all cases, without recurrence. Two patients were lost to follow-up. Upon follow-up of the iris JXG group, visual acuity was stable or improved (18/19 patients, 95%) and IOP was controlled long-term without medication (14/21 patients, 74%). No eyes were managed with enucleation.\n\n\nCONCLUSIONS\nOcular JXG preferentially affects the iris and is often isolated without cutaneous involvement. Iris JXG responds to topical or periocular corticosteroids, often with stabilization or improvement of vision and IOP.",
"title": ""
},
{
"docid": "1e7721225d84896a72f2ea790570ecbd",
"text": "We have developed a Blumlein line pulse generator which utilizes the superposition of electrical pulses launched from two individually switched pulse forming lines. By using a fast power MOSFET as a switch on each end of the Blumlein line, we were able to generate pulses with amplitudes of 1 kV across a 100-Omega load. Pulse duration and polarity can be controlled by the temporal delay in the triggering of the two switches. In addition, the use of identical switches allows us to overcome pulse distortions arising from the use of non-ideal switches in the traditional Blumlein configuration. With this pulse generator, pulses with durations between 8 and 300 ns were applied to Jurkat cells (a leukemia cell line) to investigate the pulse dependent increase in calcium levels. The development of the calcium levels in individual cells was studied by spinning-disc confocal fluorescent microscopy with the calcium indicator, fluo-4. With this fast imaging system, fluorescence changes, representing calcium mobilization, could be resolved with an exposure of 5 ms every 18 ms. For a 60-ns pulse duration, each rise in intracellular calcium was greater as the electric field strength was increased from 25 kV/cm to 100 kV/cm. Only for the highest electric field strength is the response dependent on the presence of extracellular calcium. The results complement ion-exchange mechanisms previously observed during the charging of cellular membranes, which were suggested by observations of membrane potential changes during exposure.",
"title": ""
},
{
"docid": "3348e5aaa5f610f47e11f58aa1094d4d",
"text": "Accountability has emerged as a critical concept related to data protection in cloud ecosystems. It is necessary to maintain chains of accountability across cloud ecosystems. This is to enhance the confidence in the trust that cloud actors have while operating in the cloud. This paper is concerned with accountability in the cloud. It presents a conceptual model, consisting of attributes, practices and mechanisms for accountability in the cloud. The proposed model allows us to explain, in terms of accountability attributes, cloud-mediated interactions between actors. This forms the basis for characterizing accountability relationships between cloud actors, and hence chains of accountability in cloud ecosystems.",
"title": ""
}
] |
scidocsrr
|
1dbaf7c92cceefc110c73c346c2875b2
|
3D printed soft actuators for a legged robot capable of navigating unstructured terrain
|
[
{
"docid": "6c5969169086a3b412e27f630c054c60",
"text": "Soft continuum manipulators have the advantage of being more compliant and having more degrees of freedom than rigid redundant manipulators. This attribute should allow soft manipulators to autonomously execute highly dexterous tasks. However, current approaches to motion planning, inverse kinematics, and even design limit the capacity of soft manipulators to take full advantage of their inherent compliance. We provide a computational approach to whole arm planning for a soft planar manipulator that advances the arm's end effector pose in task space while simultaneously considering the arm's entire envelope in proximity to a confined environment. The algorithm solves a series of constrained optimization problems to determine locally optimal inverse kinematics. Due to inherent limitations in modeling the kinematics of a highly compliant soft robot and the local optimality of the planner's solutions, we also rely on the increased softness of our newly designed manipulator to accomplish the whole arm task, namely the arm's ability to harmlessly collide with the environment. We detail the design and fabrication of the new modular manipulator as well as the planner's central algorithm. We experimentally validate our approach by showing that the robotic system is capable of autonomously advancing the soft arm through a pipe-like environment in order to reach distinct goal states.",
"title": ""
}
] |
[
{
"docid": "55b3fe6f2b93fd958d0857b485927bc9",
"text": "In this paper, in order to satisfy multiple closed-loop performance specifications simultaneously while improving tracking accuracy during high-speed, high-acceleration tracking motions of a 3-degree-of-freedom (3-DOF) planar parallel manipulator, we propose a new control approach, termed convex synchronized (C-S) control. This control strategy is based on the so-called convex combination method, in which the synchronized control method is adopted. Through the adoption of a set of n synchronized controllers, each of which is tuned to satisfy at least one of a set of n closed-loop performance specifications, the resultant set of n closed-loop transfer functions are combined in a convex manner, from which a C-S controller is solved algebraically. Significantly, the resultant C-S controller simultaneously satisfies all n closed-loop performance specifications. Since each synchronized controller is only required to satisfy at least one of the n closed-loop performance specifications, the convex combination method is more efficient than trial-and-error methods, where the gains of a single controller are tuned to satisfy all n closed-loop performance specifications simultaneously. Furthermore, during the design of each synchronized controller, a feedback signal, termed the synchronization error, is employed. Different from the traditional tracking errors, this synchronization error represents the degree of coordination of the active joints in the parallel manipulator based on the manipulator kinematics. As a result, the trajectory tracking accuracy of each active joint and that of the manipulator end-effector is improved. Thus, possessing both the advantages of the convex combination method and synchronized control, the proposed C-S control method can satisfy multiple closed-loop performance specifications simultaneously while improving tracking accuracy. In addition, unavoidable dynamic modeling errors are addressed through the introduction of a robust performance specification, which ensures that all performance specifications are satisfied despite allowable variations in dynamic parameters, or modeling errors. Experiments conducted on a 3-DOF P-R-R-type planar parallel manipulator demonstrate the aforementioned claims.",
"title": ""
},
{
"docid": "f4b92c53dc001d06489093ff302384b2",
"text": "Computational topology has recently known an important development toward data analysis, giving birth to the field of topological data analysis. Topological persistence, or persistent homology, appears as a fundamental tool in this field. In this paper, we study topological persistence in general metric spaces, with a statistical approach. We show that the use of persistent homology can be naturally considered in general statistical frameworks and persistence diagrams can be used as statistics with interesting convergence properties. Some numerical experiments are performed in various contexts to illustrate our results.",
"title": ""
},
{
"docid": "1f62fab7d2d88ab3c048e0c620f3842b",
"text": "Being able to locate the origin of a sound is important for our capability to interact with the environment. Humans can locate a sound source in both the horizontal and vertical plane with only two ears, using the head related transfer function HRTF, or more specifically features like interaural time difference ITD, interaural level difference ILD, and notches in the frequency spectra. In robotics notches have been left out since they are considered complex and difficult to use. As they are the main cue for humans' ability to estimate the elevation of the sound source this have to be compensated by adding more microphones or very large and asymmetric ears. In this paper, we present a novel method to extract the notches that makes it possible to accurately estimate the location of a sound source in both the horizontal and vertical plane using only two microphones and human-like ears. We suggest the use of simple spiral-shaped ears that has similar properties to the human ears and make it easy to calculate the position of the notches. Finally we show how the robot can learn its HRTF and build audiomotor maps using supervised learning and how it automatically can update its map using vision and compensate for changes in the HRTF due to changes to the ears or the environment",
"title": ""
},
{
"docid": "0af4eddf70691a7bff675d42a39f96ae",
"text": "How do we know which grammatical error correction (GEC) system is best? A number of metrics have been proposed over the years, each motivated by weaknesses of previous metrics; however, the metrics themselves have not been compared to an empirical gold standard grounded in human judgments. We conducted the first human evaluation of GEC system outputs, and show that the rankings produced by metrics such as MaxMatch and I-measure do not correlate well with this ground truth. As a step towards better metrics, we also propose GLEU, a simple variant of BLEU, modified to account for both the source and the reference, and show that it hews much more closely to human judgments.",
"title": ""
},
{
"docid": "b7dcd24f098965ff757b7ce5f183662b",
"text": "We give an overview of a complex systems approach to large blackouts of electric power transmission systems caused by cascading failure. Instead of looking at the details of particular blackouts, we study the statistics and dynamics of series of blackouts with approximate global models. Blackout data from several countries suggest that the frequency of large blackouts is governed by a power law. The power law makes the risk of large blackouts consequential and is consistent with the power system being a complex system designed and operated near a critical point. Power system overall loading or stress relative to operating limits is a key factor affecting the risk of cascading failure. Power system blackout models and abstract models of cascading failure show critical points with power law behavior as load is increased. To explain why the power system is operated near these critical points and inspired by concepts from self-organized criticality, we suggest that power system operating margins evolve slowly to near a critical point and confirm this idea using a power system model. The slow evolution of the power system is driven by a steady increase in electric loading, economic pressures to maximize the use of the grid, and the engineering responses to blackouts that upgrade the system. Mitigation of blackout risk should account for dynamical effects in complex self-organized critical systems. For example, some methods of suppressing small blackouts could ultimately increase the risk of large blackouts.",
"title": ""
},
{
"docid": "46ab85859bd3966b243db79696a236f0",
"text": "The general purpose optimization method known as Particle Swarm Optimization (PSO) has received much attention in past years, with many attempts to find the variant that performs best on a wide variety of optimization problems. The focus of past research has been with making the PSO method more complex, as this is frequently believed to increase its adaptability to other optimization problems. This study takes the opposite approach and simplifies the PSO method. To compare the efficacy of the original PSO and the simplified variant here, an easy technique is presented for efficiently tuning their behavioural parameters. The technique works by employing an overlaid meta-optimizer, which is capable of simultaneously tuning parameters with regard to multiple optimization problems, whereas previous approaches to meta-optimization have tuned behavioural parameters to work well on just a single optimization problem. It is then found that the PSO method and its simplified variant not only have comparable performance for optimizing a number of Artificial Neural Network problems, but the simplified variant appears to offer a small improvement in some cases.",
"title": ""
},
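As a concrete reference for the update rule being simplified and tuned in the abstract above, here is a minimal global-best PSO in a standard textbook form. The inertia and acceleration coefficients are common defaults and the sphere test function is an assumption, not the values or benchmark problems used by the paper's meta-optimizer.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200,
        w=0.729, c1=1.49445, c2=1.49445, bounds=(-5.0, 5.0)):
    """Minimal global-best PSO minimizing f: R^dim -> R."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))      # positions
    v = np.zeros_like(x)                                  # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)              # personal best values
    g = pbest[np.argmin(pbest_val)].copy()                # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, f(g)

best_x, best_val = pso(lambda z: float(np.sum(z * z)), dim=5)
print(best_val)  # should be close to 0 for the sphere function
```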
{
"docid": "9c5711c68c7a9c7a4a8fc4d9dbcf145d",
"text": "Approximate set membership data structures (ASMDSs) are ubiquitous in computing. They trade a tunable, often small, error rate ( ) for large space savings. The canonical ASMDS is the Bloom filter, which supports lookups and insertions but not deletions in its simplest form. Cuckoo filters (CFs), a recently proposed class of ASMDSs, add deletion support and often use fewer bits per item for equal . This work introduces the Morton filter (MF), a novel ASMDS that introduces several key improvements to CFs. Like CFs, MFs support lookups, insertions, and deletions, but improve their respective throughputs by 1.3× to 2.5×, 0.9× to 15.5×, and 1.3× to 1.6×. MFs achieve these improvements by (1) introducing a compressed format that permits a logically sparse filter to be stored compactly in memory, (2) leveraging succinct embedded metadata to prune unnecessary memory accesses, and (3) heavily biasing insertions to use a single hash function. With these optimizations, lookups, insertions, and deletions often only require accessing a single hardware cache line from the filter. These improvements are not at a loss in space efficiency, as MFs typically use comparable to slightly less space than CFs for the same . PVLDB Reference Format: Alex D. Breslow and Nuwan S. Jayasena. Morton Filters: Faster, Space-Efficient Cuckoo Filters via Biasing, Compression, and Decoupled Logical Sparsity. PVLDB, 11(9): 1041-1055, 2018. DOI: https://doi.org/10.14778/3213880.3213884",
"title": ""
},
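To ground the ASMDS terminology used above, a minimal Bloom filter (the "canonical ASMDS" the abstract contrasts against) is sketched below. It only illustrates the tunable false-positive trade-off ε; it has none of the cuckoo/Morton filter machinery (buckets, fingerprints, deletion support), and the hashing scheme is just one simple choice.

```python
import hashlib
import math

class BloomFilter:
    """Minimal Bloom filter: add/contains with a tunable false-positive rate."""

    def __init__(self, capacity: int, error_rate: float = 0.01):
        # Standard sizing formulas for m bits and k hash functions.
        self.m = max(1, int(-capacity * math.log(error_rate) / (math.log(2) ** 2)))
        self.k = max(1, int(round(self.m / capacity * math.log(2))))
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item: str):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = BloomFilter(capacity=1000, error_rate=0.01)
bf.add("alpha")
print("alpha" in bf, "beta" in bf)  # True, (almost certainly) False
```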
{
"docid": "907b8a8a8529b09114ae60e401bec1bd",
"text": "Studies of information seeking and workplace collaboration often find that social relationships are a strong factor in determining who collaborates with whom. Social networks provide one means of visualizing existing and potential interaction in organizational settings. Groupware designers are using social networks to make systems more sensitive to social situations and guide users toward effective collaborations. Yet, the implications of embedding social networks in systems have not been systematically studied. This paper details an evaluation of two different social networks used in a system to recommend individuals for possible collaboration. The system matches people looking for expertise with individuals likely to have expertise. The effectiveness of social networks for matching individuals is evaluated and compared. One finding is that social networks embedded into systems do not match individuals' perceptions of their personal social network. This finding and others raise issues for the use of social networks in groupware. Based on the evaluation results, several design considerations are discussed.",
"title": ""
},
{
"docid": "5afdbb9c705ad379227a46958addc8f2",
"text": "In this paper we present a novel experiment to explore the impact of avatar realism on the illusion of virtual body ownership (IVBO) in immersive virtual environments, with full-body avatar embodiment and freedom of movement. We evaluated four distinct avatars (a humanoid robot, a block-man, and both male and female human adult) presenting an increasing level of anthropomorphism in their detailed compositions Our results revealed that each avatar elicited a relatively high level of illusion. However both machine-like and cartoon-like avatars elicited an equivalent IVBO, slightly superior to the human-ones. A realistic human appearance is therefore not a critical top-down factor of IVBO, and could lead to an Uncanney Valley effect.",
"title": ""
},
{
"docid": "4e7122172cb7c37416381c251b510948",
"text": "Anatomic and physiologic data are used to analyze the energy expenditure on different components of excitatory signaling in the grey matter of rodent brain. Action potentials and postsynaptic effects of glutamate are predicted to consume much of the energy (47% and 34%, respectively), with the resting potential consuming a smaller amount (13%), and glutamate recycling using only 3%. Energy usage depends strongly on action potential rate--an increase in activity of 1 action potential/cortical neuron/s will raise oxygen consumption by 145 mL/100 g grey matter/h. The energy expended on signaling is a large fraction of the total energy used by the brain; this favors the use of energy efficient neural codes and wiring patterns. Our estimates of energy usage predict the use of distributed codes, with <or=15% of neurons simultaneously active, to reduce energy consumption and allow greater computing power from a fixed number of neurons. Functional magnetic resonance imaging signals are likely to be dominated by changes in energy usage associated with synaptic currents and action potential propagation.",
"title": ""
},
{
"docid": "5db123f7b584b268f908186c67d3edcb",
"text": "From the point of view of a programmer, the robopsychology is a synonym for the activity is done by developers to implement their machine learning applications. This robopsychological approach raises some fundamental theoretical questions of machine learning. Our discussion of these questions is constrained to Turing machines. Alan Turing had given an algorithm (aka the Turing Machine) to describe algorithms. If it has been applied to describe itself then this brings us to Turing’s notion of the universal machine. In the present paper, we investigate algorithms to write algorithms. From a pedagogy point of view, this way of writing programs can be considered as a combination of learning by listening and learning by doing due to it is based on applying agent technology and machine learning. As the main result we introduce the problem of learning and then we show that it cannot easily be handled in reality therefore it is reasonable to use machine learning algorithm for learning Turing machines.",
"title": ""
},
{
"docid": "84f2072f32d2a29d372eef0f4622ddce",
"text": "This paper presents a new methodology for synthesis of broadband equivalent circuits for multi-port high speed interconnect systems from numerically obtained and/or measured frequency-domain and time-domain response data. The equivalent circuit synthesis is based on the rational function fitting of admittance matrix, which combines the frequency-domain vector fitting process, VECTFIT with its time-domain analog, TDVF to yield a robust and versatile fitting algorithm. The generated rational fit is directly converted into a SPICE-compatible circuit after passivity enforcement. The accuracy of the resulting algorithm is demonstrated through its application to the fitting of the admittance matrix of a power/ground plane structure",
"title": ""
},
{
"docid": "822b3d69fd4c55f45a30ff866c78c2b1",
"text": "Orthogonal frequency-division multiplexing (OFDM) modulation is a promising technique for achieving the high bit rates required for a wireless multimedia service. Without channel estimation and tracking, OFDM systems have to use differential phase-shift keying (DPSK), which has a 3-dB signalto-noise ratio (SNR) loss compared with coherent phase-shift keying (PSK). To improve the performance of OFDM systems by using coherent PSK, we investigate robust channel estimation for OFDM systems. We derive a minimum mean-square-error (MMSE) channel estimator, which makes full use of the timeand frequency-domain correlations of the frequency response of time-varying dispersive fading channels. Since the channel statistics are usually unknown, we also analyze the mismatch of the estimator-to-channel statistics and propose a robust channel estimator that is insensitive to the channel statistics. The robust channel estimator can significantly improve the performance of OFDM systems in a rapid dispersive fading channel.",
"title": ""
},
{
"docid": "a3e1eb38273f67a283063bce79b20b9d",
"text": "In this article, we examine the impact of digital screen devices, including television, on cognitive development. Although we know that young infants and toddlers are using touch screen devices, we know little about their comprehension of the content that they encounter on them. In contrast, research suggests that children begin to comprehend child-directed television starting at ∼2 years of age. The cognitive impact of these media depends on the age of the child, the kind of programming (educational programming versus programming produced for adults), the social context of viewing, as well the particular kind of interactive media (eg, computer games). For children <2 years old, television viewing has mostly negative associations, especially for language and executive function. For preschool-aged children, television viewing has been found to have both positive and negative outcomes, and a large body of research suggests that educational television has a positive impact on cognitive development. Beyond the preschool years, children mostly consume entertainment programming, and cognitive outcomes are not well explored in research. The use of computer games as well as educational computer programs can lead to gains in academically relevant content and other cognitive skills. This article concludes by identifying topics and goals for future research and provides recommendations based on current research-based knowledge.",
"title": ""
},
{
"docid": "2a81d56c89436b3379c7dec082d19b17",
"text": "We present a fast, efficient, and automatic method for extracting vessels from retinal images. The proposed method is based on the second local entropy and on the gray-level co-occurrence matrix (GLCM). The algorithm is designed to have flexibility in the definition of the blood vessel contours. Using information from the GLCM, a statistic feature is calculated to act as a threshold value. The performance of the proposed approach was evaluated in terms of its sensitivity, specificity, and accuracy. The results obtained for these metrics were 0.9648, 0.9480, and 0.9759, respectively. These results show the high performance and accuracy that the proposed method offers. Another aspect evaluated in this method is the elapsed time to carry out the segmentation. The average time required by the proposed method is 3 s for images of size 565 9 584 pixels. To assess the ability and speed of the proposed method, the experimental results are compared with those obtained using other existing methods.",
"title": ""
},
{
"docid": "a287e289fcf2d7e56069fabd90227c7a",
"text": "The mixing of audio signals has been at the foundation of audio production since the advent of electrical recording in the 1920’s, yet the mathematical and psychological bases for this activity are relatively under-studied. This paper investigates how the process of mixing music is conducted. We introduce a method of transformation from a “gainspace” to a “mix-space”, using a novel representation of the individual track gains. An experiment is conducted in order to obtain time-series data of mix engineers exploration of this space as they adjust levels within a multitrack session to create their desired mixture. It is observed that, while the exploration of the space is influenced by the initial configuration of track gains, there is agreement between individuals on the appropriate gain settings required to create a balanced mixture. Implications for the design of intelligent music production systems are discussed.",
"title": ""
},
{
"docid": "2e7bc1cc2f4be94ad0e4bce072a9f98a",
"text": "Glycosylation plays an important role in ensuring the proper structure and function of most biotherapeutic proteins. Even small changes in glycan composition, structure, or location can have a drastic impact on drug safety and efficacy. Recently, glycosylation has become the subject of increased focus as biopharmaceutical companies rush to create not only biosimilars, but also biobetters based on existing biotherapeutic proteins. Against this backdrop of ongoing biopharmaceutical innovation, updated methods for accurate and detailed analysis of protein glycosylation are critical for biopharmaceutical companies and government regulatory agencies alike. This review summarizes current methods of characterizing biopharmaceutical glycosylation, including compositional mass profiling, isomer-specific profiling and structural elucidation by MS and hyphenated techniques.",
"title": ""
},
{
"docid": "44b5fbb00aa1c4f9700cd06b59410d4c",
"text": "This paper presents insights from two case studies of Toyota Motor Corporation and its way of strategic global knowledge creation. We will show how Toyota’s knowledge creation has moved from merely transferring knowledge from Japan to subsidiaries abroad to a focus of creating knowledge in foreign markets by local staff. Toyota’s new strategy of ‘learn local, act global’ for international business development proved successful for tapping rich local knowledge bases, thus ensuring competitive edge. In fact, this strategy finally turned Toyota from simply being a global projector to a truly metanational company.",
"title": ""
},
{
"docid": "c7b7ca49ea887c25b05485e346b5b537",
"text": "I n our last article 1 we described the external features which characterize the cranial and facial structures of the cranial strains known as hyperflexion and hyperextension. To understand how these strains develop we have to examine the anatomical relations underlying all cranial patterns. Each strain represent a variation on a theme. By studying the features in common, it is possible to account for the facial and dental consequences of these variations. The key is the spheno-basilar symphysis and the displacements which can take place between the occiput and the sphenoid at that suture. In hyperflexion there is shortening of the cranium in an antero-posterior direction with a subsequent upward buckling of the spheno-basilar symphysis (Figure 1). In children, where the cartilage of the joint has not ossified, a v-shaped wedge can be seen occasionally on the lateral skull radiograph (Figure 2). Figure (3a) is of the cranial base seen from a vertex viewpoint. By leaving out the temporal bones the connection between the centrally placed spheno-basilar symphysis and the peripheral structures of the cranium can be seen more easily. Sutherland realized that the cranium could be divided into quadrants (Figure 3b) centered on the spheno-basilar symphysis and that what happens in each quadrant is directly influenced by the spheno-basilar symphysis. He noted that accompanying the vertical changes at the symphysis there are various lateral displacements. As the peripheral structures move laterally, this is known as external rotation. If they move closer to the midline, this is called internal rotation. It is not unusual to have one side of the face externally rotated and the other side internally rotated (Figure 4a). This can have a significant effect in the mouth, giving rise to asymmetries (Figure 4b). This shows a palatal view of the maxilla with the left posterior dentition externally rotated and the right buccal posterior segment internally rotated, reflecting the internal rotation of the whole right side of the face. This can be seen in hyperflexion but also other strains. With this background, it is now appropriate to examine in detail the cranial strain known as hyperflexion. As its name implies, it is brought about by an exaggeration of the flexion/ extension movement of the cranium into flexion. Rhythmic movement of the cranium continues despite the displacement into flexion, but it does so more readily into flexion than extension. As the skull is shortened in an antero-posterior plane, it is widened laterally. Figures 3a and 3b. 3a: cranial base from a vertex view (temporal bones left out). 3b: Sutherland’s quadrants imposed on cranial base. Figure 2. Lateral Skull Radiograph of Hyperflexion patient. Note V-shaped wedge at superior border of the spheno-basillar symphysis. Figure 1. Movement of Occiput and Sphenold in Hyperflexion. Reprinted from Orthopedic Gnathology, Hockel, J., Ed. 1983. With permission from Quintessence Publishing Co.",
"title": ""
}
] |
scidocsrr
|
98c6cf3806fab0c28b4e273947cd36e8
|
IoT Edge Device Based Key Frame Extraction for Face in Video Recognition
|
[
{
"docid": "b76af76207fa3ef07e8f2fbe6436dca0",
"text": "Face recognition applications for airport security and surveillance can benefit from the collaborative coupling of mobile and cloud computing as they become widely available today. This paper discusses our work with the design and implementation of face recognition applications using our mobile-cloudlet-cloud architecture named MOCHA and its initial performance results. The challenge lies with how to perform task partitioning from mobile devices to cloud and distribute compute load among cloud servers (cloudlet) to minimize the response time given diverse communication latencies and server compute powers. Our preliminary simulation results show that optimal task partitioning algorithms significantly affect response time with heterogeneous latencies and compute powers. Motivated by these results, we design, implement, and validate the basic functionalities of MOCHA as a proof-of-concept, and develop algorithms that minimize the overall response time for face recognition. Our experimental results demonstrate that high-powered cloudlets are technically feasible and indeed help reduce overall processing time when face recognition applications run on mobile devices using the cloud as the backend servers.",
"title": ""
}
] |
[
{
"docid": "49c1754d0d36122538e0a1721d1afce6",
"text": "Definition of GCA (TA) . Is a chronic vasculitis of large and medium vessels. . Leads to granulomatous inflammation histologically. . Predominantly affects the cranial branches of arteries arising from the arch of the aorta. . Incidence is reported as 2.2/10 000 patient-years in the UK [1] and between 7 and 29/100 000 in population age >50 years in Europe. . Incidence rates appear higher in northern climates.",
"title": ""
},
{
"docid": "84ced44b9f9a96714929ad78ed3f8732",
"text": "The CUNY-BLENDER team participated in the following tasks in TAC-KBP2010: Regular Entity Linking, Regular Slot Filling and Surprise Slot Filling task (per:disease slot). In the TAC-KBP program, the entity linking task is considered as independent from or a pre-processing step of the slot filling task. Previous efforts on this task mainly focus on utilizing the entity surface information and the sentence/document-level contextual information of the entity. Very little work has attempted using the slot filling results as feedback features to enhance entity linking. In the KBP2010 evaluation, the CUNY-BLENDER entity linking system explored the slot filling attributes that may potentially help disambiguate entity mentions. Evaluation results show that this feedback approach can achieve 9.1% absolute improvement on micro-average accuracy over the baseline using vector space model. For Regular Slot Filling we describe two bottom-up Information Extraction style pipelines and a top-down Question Answering style pipeline. Experiment results have shown that these pipelines are complementary and can be combined in a statistical re-ranking model. In addition, we present several novel approaches to enhance these pipelines, including query expansion, Markov Logic Networks based cross-slot/cross-system reasoning. Finally, as a diagnostic test, we also measured the impact of using external knowledge base and Wikipedia text mining on Slot Filling.",
"title": ""
},
{
"docid": "393ba48bf72e535bdd8a735583fae5ba",
"text": "The PCR is used widely for the study of rRNA genes amplified from mixed microbial populations. These studies resemble quantitative applications of PCR in that the templates are mixtures of homologs and the relative abundance of amplicons is thought to provide some measure of the gene ratios in the starting mixture. Although such studies have established the presence of novel rRNA genes in many natural ecosystems, inferences about gene abundance have been limited by uncertainties about the relative efficiency of gene amplification in the PCR. To address this question, three rRNA gene standards were prepared by PCR, mixed in known proportions, and amplified a second time by using primer pairs in which one primer was labeled with a fluorescent nucleotide derivative. The PCR products were digested with restriction endonucleases, and the frequencies of genes in the products were determined by electrophoresis on an Applied Biosystems 373A automated DNA sequencer in Genescan mode. Mixtures of two templates amplified with the 519F-1406R primer pair yielded products in the predicted proportions. A second primer pair (27F-338R) resulted in strong bias towards 1:1 mixtures of genes in final products, regardless of the initial proportions of the templates. This bias was strongly dependent on the number of cycles of replication. The results fit a kinetic model in which the reannealing of genes progressively inhibits the formation of template-primer hybrids.",
"title": ""
},
{
"docid": "1a44645ee469e4bbaa978216d01f7e0d",
"text": "The growing popularity of mobile search and the advancement in voice recognition technologies have opened the door for web search users to speak their queries, rather than type them. While this kind of voice search is still in its infancy, it is gradually becoming more widespread. In this paper, we examine the logs of a commercial search engine's mobile interface, and compare the spoken queries to the typed-in queries. We place special emphasis on the semantic and syntactic characteristics of the two types of queries. %Our analysis suggests that voice queries focus more on audio-visual content and question answering, and less on social networking and adult domains. We also conduct an empirical evaluation showing that the language of voice queries is closer to natural language than typed queries. Our analysis reveals further differences between voice and text search, which have implications for the design of future voice-enabled search tools.",
"title": ""
},
{
"docid": "9feeeabb8491a06ae130c99086a9d069",
"text": "Dopamine (DA) is a key transmitter in the basal ganglia, yet DA transmission does not conform to several aspects of the classic synaptic doctrine. Axonal DA release occurs through vesicular exocytosis and is action potential- and Ca²⁺-dependent. However, in addition to axonal release, DA neurons in midbrain exhibit somatodendritic release by an incompletely understood, but apparently exocytotic, mechanism. Even in striatum, axonal release sites are controversial, with evidence for DA varicosities that lack postsynaptic specialization, and largely extrasynaptic DA receptors and transporters. Moreover, DA release is often assumed to reflect a global response to a population of activities in midbrain DA neurons, whether tonic or phasic, with precise timing and specificity of action governed by other basal ganglia circuits. This view has been reinforced by anatomical evidence showing dense axonal DA arbors throughout striatum, and a lattice network formed by DA axons and glutamatergic input from cortex and thalamus. Nonetheless, localized DA transients are seen in vivo using voltammetric methods with high spatial and temporal resolution. Mechanistic studies using similar methods in vitro have revealed local regulation of DA release by other transmitters and modulators, as well as by proteins known to be disrupted in Parkinson's disease and other movement disorders. Notably, the actions of most other striatal transmitters on DA release also do not conform to the synaptic doctrine, with the absence of direct synaptic contacts for glutamate, GABA, and acetylcholine (ACh) on striatal DA axons. Overall, the findings reviewed here indicate that DA signaling in the basal ganglia is sculpted by cooperation between the timing and pattern of DA input and those of local regulatory factors.",
"title": ""
},
{
"docid": "98ecd6eeb4e8764b3ecb0ed03105ef38",
"text": "Autonomous navigation is an important feature that allows a mobile robot to independently move from a point to another without an intervention from a human operator. Autonomous navigation within an unknown area requires the robot to explore, localize and map its surroundings. By solving a maze, the pertaining algorithms and behaviour of the robot can be studied and improved upon. This paper describes an implementation of a maze-solving robot designed to solve a maze with turning indicators. The black turning indicators tell the robot which way to turn at the intersections to reach at the centre of the maze. Detection of intersection line and turning indicators in the maze was done by using LDR sensors. Algorithm for straight-line correction was based on PI(D) controller.",
"title": ""
},
{
"docid": "0618e88e1319a66cd7f69db491f78aca",
"text": "The rich dependency structure found in the columns of real-world relational databases can be exploited to great advantage, but can also cause query optimizers---which usually assume that columns are statistically independent---to underestimate the selectivities of conjunctive predicates by orders of magnitude. We introduce CORDS, an efficient and scalable tool for automatic discovery of correlations and soft functional dependencies between columns. CORDS searches for column pairs that might have interesting and useful dependency relations by systematically enumerating candidate pairs and simultaneously pruning unpromising candidates using a flexible set of heuristics. A robust chi-squared analysis is applied to a sample of column values in order to identify correlations, and the number of distinct values in the sampled columns is analyzed to detect soft functional dependencies. CORDS can be used as a data mining tool, producing dependency graphs that are of intrinsic interest. We focus primarily on the use of CORDS in query optimization. Specifically, CORDS recommends groups of columns on which to maintain certain simple joint statistics. These \"column-group\" statistics are then used by the optimizer to avoid naive selectivity estimates based on inappropriate independence assumptions. This approach, because of its simplicity and judicious use of sampling, is relatively easy to implement in existing commercial systems, has very low overhead, and scales well to the large numbers of columns and large table sizes found in real-world databases. Experiments with a prototype implementation show that the use of CORDS in query optimization can speed up query execution times by an order of magnitude. CORDS can be used in tandem with query feedback systems such as the LEO learning optimizer, leveraging the infrastructure of such systems to correct bad selectivity estimates and ameliorating the poor performance of feedback systems during slow learning phases.",
"title": ""
},
{
"docid": "79cdb154262b6588abec7c374f6a289f",
"text": "We propose a new family of description logics (DLs), called DL-Lite, specifically tailored to capture basic ontology languages, while keeping low complexity of reasoning. Reasoning here means not only computing subsumption between concepts and checking satisfiability of the whole knowledge base, but also answering complex queries (in particular, unions of conjunctive queries) over the instance level (ABox) of the DL knowledge base. We show that, for the DLs of the DL-Lite family, the usual DL reasoning tasks are polynomial in the size of the TBox, and query answering is LogSpace in the size of the ABox (i.e., in data complexity). To the best of our knowledge, this is the first result of polynomial-time data complexity for query answering over DL knowledge bases. Notably our logics allow for a separation between TBox and ABox reasoning during query evaluation: the part of the process requiring TBox reasoning is independent of the ABox, and the part of the process requiring access to the ABox can be carried out by an SQL engine, thus taking advantage of the query optimization strategies provided by current database management systems. Since even slight extensions to the logics of the DL-Lite family make query answering at least NLogSpace in data complexity, thus ruling out the possibility of using on-the-shelf relational technology for query processing, we can conclude that the logics of the DL-Lite family are the maximal DLs supporting efficient query answering over large amounts of instances.",
"title": ""
},
{
"docid": "704f4681b724a0e4c7c10fd129f3378b",
"text": "We present an asymptotic fully polynomial approximation scheme for strip-packing, or packing rectangles into a rectangle of xed width and minimum height, a classical NP-hard cutting-stock problem. The algorithm nds a packing of n rectangles whose total height is within a factor of (1 +) of optimal (up to an additive term), and has running time polynomial both in n and in 1==. It is based on a reduction to fractional bin-packing. R esum e Nous pr esentons un sch ema totalement polynomial d'approximation pour la mise en boite de rectangles dans une boite de largeur x ee, avec hauteur mi-nimale, qui est un probleme NP-dur classique, de coupes par guillotine. L'al-gorithme donne un placement des rectangles, dont la hauteur est au plus egale a (1 +) (hauteur optimale) et a un temps d'execution polynomial en n et en 1==. Il utilise une reduction au probleme de la mise en boite fractionaire. Abstract We present an asymptotic fully polynomial approximation scheme for strip-packing, or packing rectangles into a rectangle of xed width and minimum height, a classical N P-hard cutting-stock problem. The algorithm nds a packing of n rectangles whose total height is within a factor of (1 +) of optimal (up to an additive term), and has running time polynomial both in n and in 1==. It is based on a reduction to fractional bin-packing.",
"title": ""
},
{
"docid": "12cac87e781307224db2c3edf0d217b8",
"text": "Fetal ventriculomegaly (VM) refers to the enlargement of the cerebral ventricles in utero. It is associated with the postnatal diagnosis of hydrocephalus. VM is clinically diagnosed on ultrasound and is defined as an atrial diameter greater than 10 mm. Because of the anatomic detailed seen with advanced imaging, VM is often further characterized by fetal magnetic resonance imaging (MRI). Fetal VM is a heterogeneous condition with various etiologies and a wide range of neurodevelopmental outcomes. These outcomes are heavily dependent on the presence or absence of associated anomalies and the direct cause of the ventriculomegaly rather than on the absolute degree of VM. In this review article, we discuss diagnosis, work-up, counseling, and management strategies as they relate to fetal VM. We then describe imaging-based research efforts aimed at using prenatal data to predict postnatal outcome. Finally, we review the early experience with fetal therapy such as in utero shunting, as well as the advances in prenatal diagnosis and fetal surgery that may begin to address the limitations of previous therapeutic efforts.",
"title": ""
},
{
"docid": "2512c057299a86d3e461a15b67377944",
"text": "Compressive sensing (CS) is an alternative to Shan-non/Nyquist sampling for the acquisition of sparse or compressible signals. Instead of taking periodic samples, CS measures inner products with M random vectors, where M is much smaller than the number of Nyquist-rate samples. The implications of CS are promising for many applications and enable the design of new kinds of analog-to-digital converters, imaging systems, and sensor networks. In this paper, we propose and study a wideband compressive radio receiver (WCRR) architecture that can efficiently acquire and track FM and other narrowband signals that live within a wide frequency bandwidth. The receiver operates below the Nyquist rate and has much lower complexity than either a traditional sampling system or CS recovery system. Our methods differ from most standard approaches to the problem of CS recovery in that we do not assume that the signals of interest are confined to a discrete set of frequencies, and we do not rely on traditional recovery methods such as l1-minimization. Instead, we develop a simple detection system that identifies the support of the narrowband FM signals and then applies compressive filtering techniques based on discrete prolate spheroidal sequences to cancel interference and isolate the signals. Lastly, a compressive phase-locked loop (PLL) directly recovers the FM message signals.",
"title": ""
},
{
"docid": "7d25c646a8ce7aa862fba7088b8ea915",
"text": "Neuro-dynamic programming (NDP for short) is a relatively new class of dynamic programming methods for control and sequential decision making under uncertainty. These methods have the potential of dealing with problems that for a long time were thought to be intractable due to either a large state space or the lack of an accurate model. They combine ideas from the fields of neural networks, artificial intelligence, cognitive science, simulation, and approximation theory. We will delineate the major conceptual issues, survey a number of recent developments, describe some computational experience, and address a number of open questions. We consider systems where decisions are made in stages. The outcome of each decision is not fully predictable but can be anticipated to some extent before the next decision is made. Each decision results in some immediate cost but also affects the context in which future decisions are to be made and therefore affects the cost incurred in future stages. Dynamic programming (DP for short) provides a mathematical formalization of the tradeoff between immediate and future costs. Generally, in DP formulations there is a discrete-time dynamic system whose state evolves according to given transition probabilities that depend on a decision/control u. In particular, if we are in state i and we choose decision u, we move to state j with given probability pij(u). Simultaneously with this transition, we incur a cost g(i, u, j). In comparing, however, the available decisions u, it is not enough to look at the magnitude of the cost g(i, u, j); we must also take into account how desirable the next state j is. We thus need a way to rank or rate states j. This is done by using the optimal cost (over all remaining stages) starting from state j, which is denoted by J∗(j). These costs can be shown to",
"title": ""
},
{
"docid": "e4236031c7d165a48a37171c47de1c38",
"text": "We present a discrete event simulation model reproducing the adoption of Radio Frequency Identification (RFID) technology for the optimal management of common logistics processes of a Fast Moving Consumer Goods (FMCG) warehouse. In this study, simulation is exploited as a powerful tool to replicate both the reengineered RFID logistics processes and the flows of Electronic Product Code (EPC) data generated by such processes. Moreover, a complex tool has been developed to analyze data resulting from the simulation runs, thus addressing the issue of how the flows of EPC data generated by RFID technology can be exploited to provide value-added information for optimally managing the logistics processes. Specifically, an EPCIS compliant Data Warehouse has been designed to act as EPCIS Repository and store EPC data resulting from simulation. Starting from EPC data, properly designed tools, referred to as Business Intelligence Modules, provide value-added information for processes optimization. Due to the newness of RFID adoption in the logistics context and to the lack of real case examples that can be examined, we believe that both the model and the data management system developed can be very useful to understand the practical implications of the technology and related information flow, as well as to show how to leverage EPC data for process management. Results of the study can provide a proof-of-concept to substantiate the adoption of RFID technology in the FMCG industry.",
"title": ""
},
{
"docid": "343f45efbdbf654c421b99927c076c5d",
"text": "As software engineering educators, it is important for us to realize the increasing domain-specificity of software, and incorporate these changes in our design of teaching material. Bioinformatics software is an example of immensely complex and critical scientific software and this domain provides an excellent illustration of the role of computing in the life sciences. To study bioinformatics from a software engineering standpoint, we conducted an exploratory survey of bioinformatics developers. The survey had a range of questions about people, processes and products. We learned that practices like extreme programming, requirements engineering and documentation. As software engineering educators, we realized that the survey results had important implications for the education of bioinformatics professionals. We also investigated the current status of software engineering education in bioinformatics, by examining the curricula of more than fifty bioinformatics programs and the contents of over fifteen textbooks. We observed that there was no mention of the role and importance of software engineering practices essential for creating dependable software systems. Based on our findings and existing literature we present a set of recommendations for improving software engineering education in bioinformatics.",
"title": ""
},
{
"docid": "6be88914654c736c8e1575aeb37532a3",
"text": "Coding EMRs with diagnosis and procedure codes is an indispensable task for billing, secondary data analyses, and monitoring health trends. Both speed and accuracy of coding are critical. While coding errors could lead to more patient-side financial burden and mis-interpretation of a patient's well-being, timely coding is also needed to avoid backlogs and additional costs for the healthcare facility. In this paper, we present a new neural network architecture that combines ideas from few-shot learning matching networks, multi-label loss functions, and convolutional neural networks for text classification to significantly outperform other state-of-the-art models. Our evaluations are conducted using a well known deidentified EMR dataset (MIMIC) with a variety of multi-label performance measures.",
"title": ""
},
{
"docid": "1dbdd4a6d39fe973b5c6f860ec9873a2",
"text": "Meaningful facial parts can convey key cues for both facial action unit detection and expression prediction. Textured 3D face scan can provide both detailed 3D geometric shape and 2D texture appearance cues of the face which are beneficial for Facial Expression Recognition (FER). However, accurate facial parts extraction as well as their fusion are challenging tasks. In this paper, a novel system for 3D FER is designed based on accurate facial parts extraction and deep feature fusion of facial parts. Experiments are conducted on the BU-3DFE database, demonstrating the effectiveness of combing different facial parts, texture and depth cues and reporting the state-of-the-art results in comparison with all existing methods under the same setting.",
"title": ""
},
{
"docid": "b4f19048d26c0620793da5f5422a865f",
"text": "Interest in supply chain management has steadily increased since the 1980s when firms saw the benefits of collaborative relationships within and beyond their own organization. Firms are finding that they can no longer compete effectively in isolation of their suppliers or other entities in the supply chain. A number of definitions of supply chain management have been proposed in the literature and in practice. This paper defines the concept of supply chain management and discusses its historical evolution. The term does not replace supplier partnerships, nor is it a description of the logistics function. The competitive importance of linking a firm’s supply chain strategy to its overall business strategy and some practical guidelines are offered for successful supply chain management. Introduction to supply chain concepts Firms can no longer effectively compete in isolation of their suppliers and other entities in the supply chain. Interest in the concept of supply chain management has steadily increased since the 1980s when companies saw the benefits of collaborative relationships within and beyond their own organization. A number of definitions have been proposed concerning the concept of “the supply chain” and its management. This paper defines the concept of the supply chain and discusses the evolution of supply chain management. The term does not replace supplier partnerships, nor is it a description of the logistics function. Industry groups are now working together to improve the integrative processes of supply chain management and accelerate the benefits available through successful implementation. The competitive importance of linking a firm’s supply chain strategy to its overall business strategy and some practical guidelines are offered for successful supply chain management. Definition of supply chain Various definitions of a supply chain have been offered in the past several years as the concept has gained popularity. The APICS Dictionary describes the supply chain as: 1 the processes from the initial raw materials to the ultimate consumption of the finished product linking across supplieruser companies; and 2 the functions within and outside a company that enable the value chain to make products and provide services to the customer (Cox et al., 1995). Another source defines supply chain as, the network of entities through which material flows. Those entities may include suppliers, carriers, manufacturing sites, distribution centers, retailers, and customers (Lummus and Alber, 1997). The Supply Chain Council (1997) uses the definition: “The supply chain – a term increasingly used by logistics professionals – encompasses every effort involved in producing and delivering a final product, from the supplier’s supplier to the customer’s customer. Four basic processes – plan, source, make, deliver – broadly define these efforts, which include managing supply and demand, sourcing raw materials and parts, manufacturing and assembly, warehousing and inventory tracking, order entry and order management, distribution across all channels, and delivery to the customer.” Quinn (1997) defines the supply chain as “all of those activities associated with moving goods from the raw-materials stage through to the end user. This includes sourcing and procurement, production scheduling, order processing, inventory management, transportation, warehousing, and customer service. 
Importantly, it also embodies the information systems so necessary to monitor all of those activities.” In addition to defining the supply chain, several authors have further defined the concept of supply chain management. As defined by Ellram and Cooper (1993), supply chain management is “an integrating philosophy to manage the total flow of a distribution channel from supplier to ultimate customer”. Monczka and Morgan (1997) state that “integrated supply chain management is about going from the external customer and then managing all the processes that are needed to provide the customer with value in a horizontal way”. They believe that supply chains, not firms, compete and that those who will be the strongest competitors are those that “can provide management and leadership to the fully integrated supply chain including external customer as well as prime suppliers, their suppliers, and their suppliers’ suppliers”. From these definitions, a summary definition of the supply chain can be stated as: all the activities involved in delivering a product from raw material through to the customer including sourcing raw materials and parts, manufacturing and assembly, warehousing and inventory tracking, order entry and order management, distribution across all channels, delivery to the customer, and the information systems necessary to monitor all of these activities. Supply chain management coordinates and integrates all of these activities into a seamless process. It links all of the partners in the chain including departments",
"title": ""
},
{
"docid": "b8087b15edb4be5771aef83b1b18f723",
"text": "The success of visual telecommunication systems depends on their ability to transmit and display users' natural nonverbal behavior. While video-mediated communication (VMC) is the most widely used form of interpersonal remote interaction, avatar-mediated communication (AMC) in shared virtual environments is increasingly common. This paper presents two experiments investigating eye tracking in AMC. The first experiment compares the degree of social presence experienced in AMC and VMC during truthful and deceptive discourse. Eye tracking data (gaze, blinking, and pupil size) demonstrates that oculesic behavior is similar in both mediation types, and uncovers systematic differences between truth telling and lying. Subjective measures show users' psychological arousal to be greater in VMC than AMC. The second experiment demonstrates that observers of AMC can more accurately detect truth and deception when viewing avatars with added oculesic behavior driven by eye tracking. We discuss implications for the design of future visual telecommunication media interfaces.",
"title": ""
},
{
"docid": "d13e3aa8d5dbb412390354fc2a0d1bda",
"text": "Over the past few years, mobile marketing has generated an increasing interest among academics and practitioners. While numerous studies have provided important insights into the mobile marketing, our understanding of this topic of growing interest and importance remains deficient. Therefore, the objective of this article is to provide a comprehensive framework intended to guide research efforts focusing on mobile media as well as to aid practitioners in their quest to achieve mobile marketing success. The framework builds on the literature from mobile commerce and integrated marketing communications (IMC) and provides a broad delineation as to how mobile marketing should be integrated into the firm’s overall marketing communications strategy. It also outlines the mobile marketing from marketing communications mix (also called promotion mix) perspective and provides a comprehensive overview of divergent mobile marketing activities. The article concludes with a detailed description of mobile marketing campaign planning and implementation.",
"title": ""
}
] |
scidocsrr
|
03ace445db37807e2c9f592683978456
|
Filicide-suicide: common factors in parents who kill their children and themselves.
|
[
{
"docid": "5636a228fea893cd48cebe15f72c0bb0",
"text": "A familicide is a multiple-victim homicide incident in which the killer’s spouse and one or more children are slain. National archives of Canadian and British homicides, containing 109 familicide incidents, permit some elucidation of the characteristic and epidemiology of this crime. Familicides were almost exclusively perpetrated by men, unlike other spouse-killings and other filicides. Half the familicidal men killed themselves as well, a much higher rate of suicide than among other uxoricidal or filicidal men. De facto unions were overrepresented, compared to their prevalence in the populations-atlarge, but to a much lesser extent in familicides than in other uxoricides. Stepchildren were overrepresented as familicide victims, compared to their numbers in the populations-at-large, but to a much lesser extent than in other filicides; unlike killers of their genetic offspring, men who killed their stepchildren were rarely suicidal. An initial binary categorization of familicides as accusatory versus despondent is tentatively proposed. @ 19% wiley-Liss, Inc.",
"title": ""
}
] |
[
{
"docid": "773bd34632ce1afe27f994edf906fea3",
"text": "Crossed-guide X-band waveguide couplers with bandwidths of up to 40% and coupling factors of better than 5 dB are presented. The tight coupling and wide bandwidth are achieved by using reduced height waveguide. Design graphs and measured data are presented.",
"title": ""
},
{
"docid": "bc03f442a0785b4179f6eefb2c5d0a35",
"text": "Internet of Things (IoT)-generated data are characterized by its continuous generation, large amount, and unstructured format. The existing relational database technologies are inadequate to handle such IoT-generated data due to the limited processing speed and the significant storage-expansion cost. Thus, big data processing technologies, which are normally based on distributed file systems, distributed database management, and parallel processing technologies, have arisen as a core technology to implement IoT-generated data repositories. In this paper, we propose a sensor-integrated radio frequency identification (RFID) data repository-implementation model using MongoDB, the most popular big data-savvy document-oriented database system now. First, we devise a data repository schema that can effectively integrate and store the heterogeneous IoT data sources, such as RFID, sensor, and GPS, by extending the event data types in electronic product code information services standard, a de facto standard for the information exchange services for RFID-based traceability. Second, we propose an effective shard key to maximize query speed and uniform data distribution over data servers. Last, through a series of experiments measuring query speed and the level of data distribution, we show that the proposed design strategy, which is based on horizontal data partitioning and a compound shard key, is effective and efficient for the IoT-generated RFID/sensor big data.",
"title": ""
},
{
"docid": "6eb4eb9b80b73bdcd039dfc8e07c3f5a",
"text": "Code duplication or copying a code fragment and then reuse by pasting with or without any modifications is a well known code smell in software maintenance. Several studies show that about 5% to 20% of a software systems can contain duplicated code, which is basically the results of copying existing code fragments and using then by pasting with or without minor modifications. One of the major shortcomings of such duplicated fragments is that if a bug is detected in a code fragment, all the other fragments similar to it should be investigated to check the possible existence of the same bug in the similar fragments. Refactoring of the duplicated code is another prime issue in software maintenance although several studies claim that refactoring of certain clones are not desirable and there is a risk of removing them. However, it is also widely agreed that clones should at least be detected. In this paper, we survey the state of the art in clone detection research. First, we describe the clone terms commonly used in the literature along with their corresponding mappings to the commonly used clone types. Second, we provide a review of the existing clone taxonomies, detection approaches and experimental evaluations of clone detection tools. Applications of clone detection research to other domains of software engineering and in the same time how other domain can assist clone detection research have also been pointed out. Finally, this paper concludes by pointing out several open problems related to clone detection research. ∗This document represents our initial findings and a further study is being carried on. Reader’s feedback is welcome at [email protected].",
"title": ""
},
{
"docid": "858f15a9fc0e014dd9ffa953ac0e70f7",
"text": "Canny (IEEE Trans. Pattern Anal. Image Proc. 8(6):679-698, 1986) suggested that an optimal edge detector should maximize both signal-to-noise ratio and localization, and he derived mathematical expressions for these criteria. Based on these criteria, he claimed that the optimal step edge detector was similar to a derivative of a gaussian. However, Canny’s work suffers from two problems. First, his derivation of localization criterion is incorrect. Here we provide a more accurate localization criterion and derive the optimal detector from it. Second, and more seriously, the Canny criteria yield an infinitely wide optimal edge detector. The width of the optimal detector can however be limited by considering the effect of the neighbouring edges in the image. If we do so, we find that the optimal step edge detector, according to the Canny criteria, is the derivative of an ISEF filter, proposed by Shen and Castan (Graph. Models Image Proc. 54:112–133, 1992). In addition, if we also consider detecting blurred (or non-sharp) gaussian edges of different widths, we find that the optimal blurred-edge detector is the above optimal step edge detector convolved with a gaussian. This implies that edge detection must be performed at multiple scales to cover all the blur widths in the image. We derive a simple scale selection procedure for edge detection, and demonstrate it in one and two dimensions.",
"title": ""
},
{
"docid": "767179a47047435dd2d49db15598c2ef",
"text": "We determine when a join/outerjoin query can be expressed unambiguously as a query graph, without an explicit specification of the order of evaluation. To do so, we first characterize the set of expression trees that implement a given join/outerjoin query graph, and investigate the existence of transformations among the various trees. Our main theorem is that a join/outerjoin query is freely reorderable if the query graph derived from it falls within a particular class, every tree that “implements” such a graph evaluates to the same result.\nThe result has applications to language design and query optimization. Languages that generate queries within such a class do not require the user to indicate priority among join operations, and hence may present a simplified syntax. And it is unnecessary to add extensive analyses to a conventional query optimizer in order to generate legal reorderings for a freely-reorderable language.",
"title": ""
},
{
"docid": "79fdfee8b42fe72a64df76e64e9358bc",
"text": "An algorithm is described to solve multiple-phase optimal control problems using a recently developed numerical method called the Gauss pseudospectral method. The algorithm is well suited for use in modern vectorized programming languages such as FORTRAN 95 and MATLAB. The algorithm discretizes the cost functional and the differential-algebraic equations in each phase of the optimal control problem. The phases are then connected using linkage conditions on the state and time. A large-scale nonlinear programming problem (NLP) arises from the discretization and the significant features of the NLP are described in detail. A particular reusable MATLAB implementation of the algorithm, called GPOPS, is applied to three classical optimal control problems to demonstrate its utility. The algorithm described in this article will provide researchers and engineers a useful software tool and a reference when it is desired to implement the Gauss pseudospectral method in other programming languages.",
"title": ""
},
{
"docid": "b499ded5996db169e65282dd8b65f289",
"text": "For complex tasks, such as manipulation and robot navigation, reinforcement learning (RL) is well-known to be difficult due to the curse of dimensionality. To overcome this complexity and making RL feasible, hierarchical RL (HRL) has been suggested. The basic idea of HRL is to divide the original task into elementary subtasks, which can be learned using RL. In this paper, we propose a HRL architecture for learning robot’s movements, e.g. robot navigation. The proposed HRL consists of two layers: (i) movement planning and (ii) movement execution. In the planning layer, e.g. generating navigation trajectories, discrete RL is employed while using movement primitives. Given the movement planning and corresponding primitives, the policy for the movement execution can be learned in the second layer using continuous RL. The proposed approach is implemented and evaluated on a mobile robot platform for a",
"title": ""
},
{
"docid": "8a325971d268cafc25845654c8a520cf",
"text": "Lokale onkologische Tumorkontrolle bei malignen Knochentumoren. Erhalt der Arm- und Handfunktion ab Ellenbogen mit der Möglichkeit, die Hand zum Mund zu führen. Vermeiden der Amputation. Stabile Aufhängung des Arms im Schulter-/Neogelenk. Primäre Knochensarkome des proximalen Humerus oder der Skapula mit Gelenkbeteiligung ohne Infiltration der Gefäßnervenstraße bei Primärmanifestation. Knochenmetastasen solider Tumoren mit großen Knochendefekten bei Primärmanifestation in palliativer/kurativer Intention oder im Revisions-/Rezidivfall nach Versagen vorhergehender Versorgungen. Tumorinfiltration der Gefäßnervenstraße. Fehlende Möglichkeit der muskulären Prothesendeckung durch ausgeprägte Tumorinfiltration der Oberarmweichteile. Transdeltoidaler Zugang unter Splitt der Deltamuskulatur. Präparation des tumortragenden Humerus unter langstreckiger Freilegung des Gefäßnervenbündels. Belassen eines onkologisch ausreichenden allseitigen Sicherheitsabstands auf dem Resektat sowohl seitens der Weichteile als auch des knöchernen Absetzungsrands. Zementierte oder zementfreie Implantation der Tumorprothese. Rekonstruktion des Gelenks und Fixation des Arms unter Verwendung eines Anbindungsschlauchs. Ggf. Bildung eines artifiziellen Gelenks bei extraartikulärer Resektion. Möglichst anatomische Refixation der initial abgesetzten Muskulatur auf dem Implantat zur Wiederherstellung der Funktion. Lagerung des Arms im z. B. Gilchrist-Verband für 4–6 Wochen postoperativ. Passive Beübung im Ellenbogengelenk nach 3–4 Wochen. Aktive Beübung der Schulter und des Ellenbogengelenks frühestens nach 4–6 Wochen. Lymphdrainage und Venenpumpe ab dem 1.–2. postoperativen Tag. The aim of the operation is local tumor control in malignant primary and secondary bone tumors of the proximal humerus. Limb salvage and preservation of function with the ability to lift the hand to the mouth. Stable suspension of the arm in the shoulder joint or the artificial joint. Primary malignant bone tumors of the proximal humerus or the scapula with joint infiltration but without involvement of the vessel/nerve bundle. Metastases of solid tumors with osteolytic defects in palliative or curative intention or after failure of primary osteosynthesis. Tumor infiltration of the vessel/nerve bundle. Massive tumor infiltration of the soft tissues without the possibility of sufficient soft tissue coverage of the implant. Transdeltoid approach with splitting of the deltoid muscle. Preparation and removal of the tumor-bearing humerus with exposure of the vessel/nerve bundle. Ensure an oncologically sufficient soft tissue and bone margin in all directions of the resection. Cementless or cemented stem implantation. Reconstruction of the joint capsule and fixation of the prosthesis using a synthetic tube. Soft tissue coverage of the prosthesis with anatomical positioning of the muscle to regain function. Immobilization of the arm/shoulder joint for 4–6 weeks in a Gilchrist bandage. Passive mobilization of the elbow joint after 3–4 weeks. Active mobilization of the shoulder and elbow joint at the earliest after 4–6 weeks.",
"title": ""
},
{
"docid": "befc74d8dc478a67c009894c3ef963d3",
"text": "In this paper, we demonstrate that the essentials of image classification and retrieval are the same, since both tasks could be tackled by measuring the similarity between images. To this end, we propose ONE (Online Nearest-neighbor Estimation), a unified algorithm for both image classification and retrieval. ONE is surprisingly simple, which only involves manual object definition, regional description and nearest-neighbor search. We take advantage of PCA and PQ approximation and GPU parallelization to scale our algorithm up to large-scale image search. Experimental results verify that ONE achieves state-of-the-art accuracy in a wide range of image classification and retrieval benchmarks.",
"title": ""
},
{
"docid": "dc3495ec93462e68f606246205a8416d",
"text": "State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information. In these formulations the current best complement to visual features are attributes: manually-encoded vectors describing shared characteristics among categories. Despite good performance, attributes have limitations: (1) finer-grained recognition requires commensurately more attributes, and (2) attributes do not provide a natural language interface. We propose to overcome these limitations by training neural language models from scratch, i.e. without pre-training and only consuming words and characters. Our proposed models train end-to-end to align with the fine-grained and category-specific content of images. Natural language provides a flexible and compact way of encoding only the salient visual aspects for distinguishing categories. By training on raw text, our model can do inference on raw text as well, providing humans a familiar mode both for annotation and retrieval. Our model achieves strong performance on zero-shot text-based image retrieval and significantly outperforms the attribute-based state-of-the-art for zero-shot classification on the Caltech-UCSD Birds 200-2011 dataset.",
"title": ""
},
{
"docid": "86497dcdfd05162804091a3368176ad5",
"text": "This paper reviews the current status and implementation of battery chargers, charging power levels and infrastructure for plug-in electric vehicles and hybrids. Battery performance depends both on types and design of the batteries, and on charger characteristics and charging infrastructure. Charger systems are categorized into off-board and on-board types with unidirectional or bidirectional power flow. Unidirectional charging limits hardware requirements and simplifies interconnection issues. Bidirectional charging supports battery energy injection back to the grid. Typical onboard chargers restrict the power because of weight, space and cost constraints. They can be integrated with the electric drive for avoiding these problems. The availability of a charging infrastructure reduces on-board energy storage requirements and costs. On-board charger systems can be conductive or inductive. While conductive chargers use direct contact, inductive chargers transfer power magnetically. An off-board charger can be designed for high charging rates and is less constrained by size and weight. Level 1 (convenience), Level 2 (primary), and Level 3 (fast) power levels are discussed. These system configurations vary from country to country depending on the source and plug capacity standards. Various power level chargers and infrastructure configurations are presented, compared, and evaluated based on amount of power, charging time and location, cost, equipment, effect on the grid, and other factors.",
"title": ""
},
{
"docid": "19937d689287ba81d2d01efd9ce8f2e4",
"text": "We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a supervised way. Our deep hierarchical architectures achieve the best published results on benchmarks for object classification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with error rates of 2.53%, 19.51%, 0.35%, respectively. Deep nets trained by simple back-propagation perform better than more shallow ones. Learning is surprisingly rapid. NORB is completely trained within five epochs. Test error rates on MNIST drop to 2.42%, 0.97% and 0.48% after 1, 3 and 17 epochs, respectively.",
"title": ""
},
{
"docid": "79833f074b2e06d5c56898ca3f008c00",
"text": "Regular expressions have served as the dominant workhorse of practical information extraction for several years. However, there has been little work on reducing the manual effort involved in building high-quality, complex regular expressions for information extraction tasks. In this paper, we propose ReLIE, a novel transformation-based algorithm for learning such complex regular expressions. We evaluate the performance of our algorithm on multiple datasets and compare it against the CRF algorithm. We show that ReLIE, in addition to being an order of magnitude faster, outperforms CRF under conditions of limited training data and cross-domain data. Finally, we show how the accuracy of CRF can be improved by using features extracted by ReLIE.",
"title": ""
},
{
"docid": "14cc42c141a420cb354473a38e755091",
"text": "During software evolution, information about changes between different versions of a program is useful for a number of software engineering tasks. For example, configuration-management systems can use change information to assess possible conflicts among updates from different users. For another example, in regression testing, knowledge about which parts of a program are unchanged can help in identifying test cases that need not be rerun. For many of these tasks, a purely syntactic differencing may not provide enough information for the task to be performed effectively. This problem is especially relevant in the case of object-oriented software, for which a syntactic change can have subtle and unforeseen effects. In this paper, we present a technique for comparing object-oriented programs that identifies both differences and correspondences between two versions of a program. The technique is based on a representation that handles object-oriented features and, thus, can capture the behavior of object-oriented programs. We also present JDiff, a tool that implements the technique for Java programs. Finally, we present the results of four empirical studies, performed on many versions of two medium-sized subjects, that show the efficiency and effectiveness of the technique when used on real programs.",
"title": ""
},
{
"docid": "053b069a59b938c183c19e2938f89e66",
"text": "This paper examines the role and value of information security awareness efforts in defending against social engineering attacks. It categories the different social engineering threats and tactics used in targeting employees and the approaches to defend against such attacks. While we review these techniques, we attempt to develop a thorough understanding of human security threats, with a suitable balance between structured improvements to defend human weaknesses, and efficiently focused security training and awareness building. Finally, the paper shows that a multi-layered shield can mitigate various security risks and minimize the damage to systems and data.",
"title": ""
},
{
"docid": "da476e5448fa34e9f6fd7034dfa53576",
"text": "In this paper we propose a multi-agent approach for traffic-light control. According to this approach, our system consists of agents and their world. In this context, the world consists of cars, road networks, traffic lights, etc. Each of these agents controls all traffic lights at one road junction by an observe-think-act cycle. That is, each agent repeatedly observes the current traffic condition surrounding its junction, and then uses this information to reason with condition-action rules to determine in what traffic condition how the agent can efficiently control the traffic flows at its junction, or collaborate with neighboring agents so that they can efficiently control the traffic flows, at their junctions, in such a way that would affect the traffic flows at its junction. This research demonstrates that a rather complicated problem of traffic-light control on a large road network can be solved elegantly by our rule-based multi-agent approach.",
"title": ""
},
{
"docid": "51505087f5ae1a9f57fe04f5e9ad241e",
"text": "Microblogs have recently received widespread interest from NLP researchers. However, current tools for Japanese word segmentation and POS tagging still perform poorly on microblog texts. We developed an annotated corpus and proposed a joint model for overcoming this situation. Our annotated corpus of microblog texts enables not only training of accurate statistical models but also quantitative evaluation of their performance. Our joint model with lexical normalization handles the orthographic diversity of microblog texts. We conducted an experiment to demonstrate that the corpus and model substantially contribute to boosting accuracy.",
"title": ""
},
{
"docid": "ed3ed757804a423eef8b7394b64a971a",
"text": "This work is part of an eort aimed at developing computer-based systems for language instruction; we address the task of grading the pronunciation quality of the speech of a student of a foreign language. The automatic grading system uses SRI's Decipher continuous speech recognition system to generate phonetic segmentations. Based on these segmentations and probabilistic models we produce dierent pronunciation scores for individual or groups of sentences that can be used as predictors of the pronunciation quality. Dierent types of these machine scores can be combined to obtain a better prediction of the overall pronunciation quality. In this paper we review some of the bestperforming machine scores and discuss the application of several methods based on linear and nonlinear mapping and combination of individual machine scores to predict the pronunciation quality grade that a human expert would have given. We evaluate these methods in a database that consists of pronunciation-quality-graded speech from American students speaking French. With predictors based on spectral match and on durational characteristics, we ®nd that the combination of scores improved the prediction of the human grades and that nonlinear mapping and combination methods performed better than linear ones. Characteristics of the dierent nonlinear methods studied are discussed. Ó 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "e48f1b661691f941ea9c648c2c597b84",
"text": "Cloud Gaming is a new kind of service, which combines the successful concepts of Cloud Computing and Online Gaming. It provides the entire game experience to the users remotely from a data center. The player is no longer dependent on a specific type or quality of gaming hardware, but is able to use common devices. The end device only needs a broadband internet connection and the ability to display High Definition (HD) video. While this may reduce hardware costs for users and increase the revenue for developers by leaving out the retail chain, it also raises new challenges for service quality in terms of bandwidth and latency for the underlying network. In this paper we present the results of a subjective user study we conducted into the user-perceived quality of experience (QoE) in Cloud Gaming. We design a measurement environment, that emulates this new type of service, define tests for users to assess the QoE, derive Key Influence Factors (KIF) and influences of content and perception from our results. © 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "959b487a51ae87b2d993e6f0f6201513",
"text": "The two-wheel differential drive mobile robots, are one of the simplest and most used structures in mobile robotics applications, it consists of a chassis with two fixed and in-line with each other electric motors. This paper presents new models for differential drive mobile robots and some considerations regarding design, modeling and control solutions. The presented models are to be used to help in facing the two top challenges in developing mechatronic mobile robots system; early identifying system level problems and ensuring that all design requirements are met, as well as, to simplify and accelerate Mechatronics mobile robots design process, including proper selection, analysis, integration and verification of the overall system and sub-systems performance throughout the development process.",
"title": ""
}
] |
scidocsrr
|
d575098c34de48087416d6963bbc4207
|
Malleability of the blockchain’s entropy
|
[
{
"docid": "886c284d72a01db9bc4eb9467e14bbbb",
"text": "The Bitcoin cryptocurrency introduced a novel distributed consensus mechanism relying on economic incentives. While a coalition controlling a majority of computational power may undermine the system, for example by double-spending funds, it is often assumed it would be incentivized not to attack to protect its long-term stake in the health of the currency. We show how an attacker might purchase mining power (perhaps at a cost premium) for a short duration via bribery. Indeed, bribery can even be performed in-band with the system itself enforcing the bribe. A bribing attacker would not have the same concerns about the long-term health of the system, as their majority control is inherently short-lived. New modeling assumptions are needed to explain why such attacks have not been observed in practice. The need for all miners to avoid short-term profits by accepting bribes further suggests a potential tragedy of the commons which has not yet been analyzed.",
"title": ""
},
{
"docid": "ca8c40d523e0c64f139ae2a3221e8ea4",
"text": "We propose Mixcoin, a protocol to facilitate anonymous payments in Bitcoin and similar cryptocurrencies. We build on the emergent phenomenon of currency mixes, adding an accountability mechanism to expose theft. We demonstrate that incentives of mixes and clients can be aligned to ensure that rational mixes will not steal. Our scheme is efficient and fully compatible with Bitcoin. Against a passive attacker, our scheme provides an anonymity set of all other users mixing coins contemporaneously. This is an interesting new property with no clear analog in better-studied communication mixes. Against active attackers our scheme offers similar anonymity to traditional communication mixes.",
"title": ""
}
] |
[
{
"docid": "d69977627ad191c9c726c0ec7fe73c59",
"text": "Despite the progress since the first attempts of mankind to explore space, it appears that sending man in space remains challenging. While robotic systems are not yet ready to replace human presence, they provide an excellent support for astronauts during maintenance and hazardous tasks. This paper presents the development of a space qualified multi-fingered robotic hand and highlights the most interesting challenges. The design concept, the mechanical structure, the electronics architecture and the control system are presented throughout this overview paper.",
"title": ""
},
{
"docid": "0e380010be90bf3dabbc39b82da6192c",
"text": "We use both reinforcement learning and deep learning to simultaneously extract entities and relations from unstructured texts. For reinforcement learning, we model the task as a two-step decision process. Deep learning is used to automatically capture the most important information from unstructured texts, which represent the state in the decision process. By designing the reward function per step, our proposed method can pass the information of entity extraction to relation extraction and obtain feedback in order to extract entities and relations simultaneously. Firstly, we use bidirectional LSTM to model the context information, which realizes preliminary entity extraction. On the basis of the extraction results, attention based method can represent the sentences that include target entity pair to generate the initial state in the decision process. Then we use Tree-LSTM to represent relation mentions to generate the transition state in the decision process. Finally, we employ Q-Learning algorithm to get control policy π in the two-step decision process. Experiments on ACE2005 demonstrate that our method attains better performance than the state-of-the-art method and gets a 2.4% increase in recall-score.",
"title": ""
},
{
"docid": "8df0689ffe5c730f7a6ef6da65bec57e",
"text": "Image-based reconstruction of 3D shapes is inherently biased under the occurrence of interreflections, since the observed intensity at surface concavities consists of direct and global illumination components. This issue is commonly not considered in a Photometric Stereo (PS) framework. Under the usual assumption of only direct reflections, this corrupts the normal estimation process in concave regions and thus leads to inaccurate results. For this reason, global illumination effects need to be considered for the correct reconstruction of surfaces affected by interreflections. While there is ongoing research in the field of inverse lighting (i.e. separation of global and direct illumination components), the interreflection aspect remains oftentimes neglected in the field of 3D shape reconstruction. In this study, we present a computationally driven approach for iteratively solving that problem. Initially, we introduce a photometric stereo approach that roughly reconstructs a surface with at first unknown reflectance properties. Then, we show that the initial surface reconstruction result can be refined iteratively regarding non-distant light sources and, especially, interreflections. The benefit for the reconstruction accuracy is evaluated on real Lambertian surfaces using laser range scanner data as ground truth.",
"title": ""
},
{
"docid": "3fc3ea7bb6c5342bcbc9d046b0a2537f",
"text": "We need to make substantial changes to how we conduct research. First, in response to heightened concern that our published research literature is incomplete and untrustworthy, we need new requirements to ensure research integrity. These include prespecification of studies whenever possible, avoidance of selection and other inappropriate data-analytic practices, complete reporting, and encouragement of replication. Second, in response to renewed recognition of the severe flaws of null-hypothesis significance testing (NHST), we need to shift from reliance on NHST to estimation and other preferred techniques. The new statistics refers to recommended practices, including estimation based on effect sizes, confidence intervals, and meta-analysis. The techniques are not new, but adopting them widely would be new for many researchers, as well as highly beneficial. This article explains why the new statistics are important and offers guidance for their use. It describes an eight-step new-statistics strategy for research with integrity, which starts with formulation of research questions in estimation terms, has no place for NHST, and is aimed at building a cumulative quantitative discipline.",
"title": ""
},
{
"docid": "95045efce8527a68485915d8f9e2c6cf",
"text": "OBJECTIVES\nTo update the normal stretched penile length values for children younger than 5 years of age. We also evaluated the association between penile length and anthropometric measures such as body weight, height, and body mass index.\n\n\nMETHODS\nThe study was performed as a cross-section study. The stretched penile lengths of 1040 white uncircumcised male infants and children 0 to 5 years of age were measured, and the mean length for each age group and the rate of increase in penile length were calculated. The correlation between penile length and weight, height, and body mass index of the children was determined by Pearson analysis.\n\n\nRESULTS\nThe stretched penile length was 3.65 +/- 0.27 cm in full-term newborns (n = 165) and 3.95 +/- 0.35 cm in children 1 to 3 months old (n = 112), 4.26 +/- 0.40 cm in those 3.1 to 6 months old (n = 130), 4.65 +/- 0.47 cm in those 6.1 to 12 months old (n = 148), 4.82 +/- 0.44 cm in those 12.1 to 24 months old (n = 135), 5.15 +/- 0.46 cm in those 24.1 to 36 months old (n = 120), 5.58 +/- 0.47 cm in those 36.1 to 48 months old (n = 117), and 6.02 +/- 0.50 cm in those 48.1 to 60 months old (n = 113). The fastest rate of increase in penile length was seen in the first 6 months of age, with a value of 1 mm/mo. A significant correlation was found between penile length and the weight, height, and body mass index of the boys (r = 0.881, r = 0.864, and r = 0.173, respectively; P = 0.001).\n\n\nCONCLUSIONS\nThe age-related values of penile length must be known to be able to determine abnormal penile sizes and to monitor treatment of underlying diseases. Our study has provided updated reference values for penile lengths for Turkish and other white boys aged 0 to 5 years.",
"title": ""
},
{
"docid": "6cca31cabf78c56b06be08cef464d666",
"text": "Sparsity-based subspace clustering algorithms have attracted significant attention thanks to their excellent performance in practical applications. A prominent example is the sparse subspace clustering (SSC) algorithm by Elhamifar and Vidal, which performs spectral clustering based on an adjacency matrix obtained by sparsely representing each data point in terms of all the other data points via the Lasso. When the number of data points is large or the dimension of the ambient space is high, the computational complexity of SSC quickly becomes prohibitive. Dyer et al. observed that SSC-orthogonal matching pursuit (OMP) obtained by replacing the Lasso by the greedy OMP algorithm results in significantly lower computational complexity, while often yielding comparable performance. The central goal of this paper is an analytical performance characterization of SSC-OMP for noisy data. Moreover, we introduce and analyze the SSC-matching pursuit (MP) algorithm, which employs MP in lieu of OMP. Both SSC-OMP and SSC-MP are proven to succeed even when the subspaces intersect and when the data points are contaminated by severe noise. The clustering conditions we obtain for SSC-OMP and SSC-MP are similar to those for SSC and for the thresholding-based subspace clustering (TSC) algorithm due to Heckel and Bölcskei. Analytical results in combination with numerical results indicate that both SSC-OMP and SSC-MP with a data-dependent stopping criterion automatically detect the dimensions of the subspaces underlying the data. Experiments on synthetic and on real data show that SSC-MP often matches or exceeds the performance of the computationally more expensive SSC-OMP algorithm. Moreover, SSC-MP compares very favorably to SSC, TSC, and the nearest subspace neighbor algorithm, both in terms of clustering performance and running time. In addition, we find that, in contrast to SSC-OMP, the performance of SSC-MP is very robust with respect to the choice of parameters in the stopping criteria.",
"title": ""
},
{
"docid": "1d8f7705ba0dd969ed6de9e7e6a9a419",
"text": "A Mecanum-wheeled robot benefits from great omni-direction maneuverability. However it suffers from random slippage and high-speed vibration, which creates electric power safety, uncertain position errors and energy waste problems for heavy-duty tasks. A lack of Mecanum research on heavy-duty autonomous navigation demands a robot platform to conduct experiments in the future. This paper introduces AuckBot, a heavy-duty omni-directional Mecanum robot platform developed at the University of Auckland, including its hardware overview, the control system architecture and the simulation design. In particular the control system, synergistically combining the Beckhoff system as the Controller-PC to serve low-level motion execution and ROS as the Navigation-PC to accomplish highlevel intelligent navigation tasks, is developed. In addition, a computer virtual simulation based on ISG-virtuos for virtual AuckBot has been validated. The present status and future work of AuckBot are described at the end.",
"title": ""
},
{
"docid": "5eac11ef2f695f78604df1e0fa683d45",
"text": "Home automation is an integral part of modern lives that help to monitor and control the home electrical devices as well as other aspects of the digital home that is expected to be the standard for the future home. Home appliance control system enables house owner to control devices Lighting, Heating and ventilation, water pumping, gardening system remotely or from any centralized location. Automatic systems are being preferred over manual system. This paper aims at automizing any home appliances. The appliances are to be controlled automatically by the programmable Logic Controller (PLC) DELTA Electronics DVP SX10. As the functioning of the Appliances is integrated with the working of PLC, the project proves to be accurate, reliable and more efficient than the existing controllers. It is a combination of electrical, electronic and mechanical section where the software used is Ladder Logic language programming. The visualization of the current status of the home appliances is made possible with the use of SCADA screen which is interfaced to the PLC through various communication protocols. Winlog visualization software is a powerful SCADA/HMI for industrial automation, process control and supervisory monitoring. This WINLOG SCADA software has the ability to Remote application deployment and change management. Also it has Modbus and OPC Connectivity and it is equipped with 3D GUI.",
"title": ""
},
{
"docid": "348e68c9175313c6079915a8b81ceecf",
"text": "There are many advantages in using UAVs for search and rescue operations. However, detecting people from a UAV remains a challenge: the embedded detector has to be fast enough and viewpoint robust to detect people in a flexible manner from aerial views. In this paper we propose a processing pipeline to 1) reduce the search space using infrared images and to 2) detect people whatever the roll and pitch angles of the UAV's acquisition system. We tested our approach on a multimodal aerial view dataset and showed that it outperforms the Integral Channel Features (ICF) detector in this context. Moreover, this approach allows real-time compatible detection.",
"title": ""
},
{
"docid": "51c42a305039d65dc442910c8078a9aa",
"text": "Infants are experts at playing, with an amazing ability to generate novel structured behaviors in unstructured environments that lack clear extrinsic reward signals. We seek to mathematically formalize these abilities using a neural network that implements curiosity-driven intrinsic motivation. Using a simple but ecologically naturalistic simulated environment in which an agent can move and interact with objects it sees, we propose a “world-model” network that learns to predict the dynamic consequences of the agent’s actions. Simultaneously, we train a separate explicit “self-model” that allows the agent to track the error map of its worldmodel. It then uses the self-model to adversarially challenge the developing world-model. We demonstrate that this policy causes the agent to explore novel and informative interactions with its environment, leading to the generation of a spectrum of complex behaviors, including ego-motion prediction, object attention, and object gathering. Moreover, the world-model that the agent learns supports improved performance on object dynamics prediction, detection, localization and recognition tasks. Taken together, our results are initial steps toward creating flexible autonomous agents that self-supervise in realistic physical environments.",
"title": ""
},
{
"docid": "18851774e598f4cb66dbc770abe4a83f",
"text": "In this paper, we propose a new approach for domain generalization by exploiting the low-rank structure from multiple latent source domains. Motivated by the recent work on exemplar-SVMs, we aim to train a set of exemplar classifiers with each classifier learnt by using only one positive training sample and all negative training samples. While positive samples may come from multiple latent domains, for the positive samples within the same latent domain, their likelihoods from each exemplar classifier are expected to be similar to each other. Based on this assumption, we formulate a new optimization problem by introducing the nuclear-norm based regularizer on the likelihood matrix to the objective function of exemplar-SVMs. We further extend Domain Adaptation Machine (DAM) to learn an optimal target classifier for domain adaptation. The comprehensive experiments for object recognition and action recognition demonstrate the effectiveness of our approach for domain generalization and domain adaptation.",
"title": ""
},
{
"docid": "232b960cc16aa558538858aefd0a7651",
"text": "This paper presents a video-based solution for real time vehicle detection and counting system, using a surveillance camera mounted on a relatively high place to acquire the traffic video stream.The two main methods applied in this system are: the adaptive background estimation and the Gaussian shadow elimination. The former allows a robust moving detection especially in complex scenes. The latter is based on color space HSV, which is able to deal with different size and intensity shadows. After these two operations, it obtains an image with moving vehicle extracted, and then operation counting is effected by a method called virtual detector.",
"title": ""
},
{
"docid": "499a37563d171054ad0b0d6b8f7007bf",
"text": "For cold-start recommendation, it is important to rapidly profile new users and generate a good initial set of recommendations through an interview process --- users should be queried adaptively in a sequential fashion, and multiple items should be offered for opinion solicitation at each trial. In this work, we propose a novel algorithm that learns to conduct the interview process guided by a decision tree with multiple questions at each split. The splits, represented as sparse weight vectors, are learned through an L_1-constrained optimization framework. The users are directed to child nodes according to the inner product of their responses and the corresponding weight vector. More importantly, to account for the variety of responses coming to a node, a linear regressor is learned within each node using all the previously obtained answers as input to predict item ratings. A user study, preliminary but first in its kind in cold-start recommendation, is conducted to explore the efficient number and format of questions being asked in a recommendation survey to minimize user cognitive efforts. Quantitative experimental validations also show that the proposed algorithm outperforms state-of-the-art approaches in terms of both the prediction accuracy and user cognitive efforts.",
"title": ""
},
{
"docid": "94eff60d3783010c0c4b4e045d18a020",
"text": "Preface 1 Preliminaries: Galois theory, algebraic number theory 2 Lecture 1. CFT of Q: classical (Mo. 19/7/10, 9:40–10:40) 4 Lecture 2. CFT of Q: via adeles (Mo. 19/7/10, 11:00–12:00) 6 Lecture 3. Local CFT, local-global compatibility (Tu. 20/7/10, 9:40–10:40) 8 Lecture 4. Global CFT, l-adic characters (Tu. 20/7/10, 11:00–12:00) 10 Appendix A. More on GLC for GL1: algebraic Hecke characters 12 Appendix B. More on GLC for GL1: algebraic Galois characters 14 Exercises 15 References 16 Index 17",
"title": ""
},
{
"docid": "ce0a855890322a98dffbb6f1a3af1c07",
"text": "Gender reassignment (which includes psychotherapy, hormonal therapy and surgery) has been demonstrated as the most effective treatment for patients affected by gender dysphoria (or gender identity disorder), in which patients do not recognize their gender (sexual identity) as matching their genetic and sexual characteristics. Gender reassignment surgery is a series of complex surgical procedures (genital and nongenital) performed for the treatment of gender dysphoria. Genital procedures performed for gender dysphoria, such as vaginoplasty, clitorolabioplasty, penectomy and orchidectomy in male-to-female transsexuals, and penile and scrotal reconstruction in female-to-male transsexuals, are the core procedures in gender reassignment surgery. Nongenital procedures, such as breast enlargement, mastectomy, facial feminization surgery, voice surgery, and other masculinization and feminization procedures complete the surgical treatment available. The World Professional Association for Transgender Health currently publishes and reviews guidelines and standards of care for patients affected by gender dysphoria, such as eligibility criteria for surgery. This article presents an overview of the genital and nongenital procedures available for both male-to-female and female-to-male gender reassignment.",
"title": ""
},
{
"docid": "247534c6b5416e4330a84e10daf2bc0c",
"text": "The aim of the present study was to determine metabolic responses, movement patterns and distance covered at running speeds corresponding to fixed blood lactate concentrations (FBLs) in young soccer players during a match play. A further aim of the study was to evaluate the relationships between FBLs, maximal oxygen consumption (VO2max) and distance covered during a game. A multistage field test was administered to 32 players to determine FBLs and VO2max. Blood lactate (LA), heart rate (HR) and rate of perceived exertion (RPE) responses were obtained from 36 players during tournament matches filmed using six fixed cameras. Images were transferred to a computer, for calibration and synchronization. In all players, values for LA and HR were higher and RPE lower during the 1(st) half compared to the 2(nd) half of the matches (p < 0.01). Players in forward positions had higher LA levels than defenders, but HR and RPE values were similar between playing positions. Total distance and distance covered in jogging, low-moderate-high intensity running and low intensity sprint were higher during the 1(st) half (p < 0.01). In the 1(st) half, players also ran longer distances at FBLs [p<0.01; average running speed at 2mmol·L(-1) (FBL2): 3.32 ± 0.31m·s(-1) and average running speed at 4mmol·L(-1) (FBL4): 3.91 ± 0.25m·s(-1)]. There was a significant difference between playing positions in distance covered at different running speeds (p < 0.05). However, when distance covered was expressed as FBLs, the players ran similar distances. In addition, relationships between FBLs and total distance covered were significant (r = 0.482 to 0.570; p < 0.01). In conclusion, these findings demonstrated that young soccer players experienced higher internal load during the 1(st) half of a game compared to the 2(nd) half. Furthermore, although movement patterns of players differed between playing positions, all players experienced a similar physiological stress throughout the game. Finally, total distance covered was associated to fixed blood lactate concentrations during play. Key pointsBased on LA, HR and RPE responses, young top soccer players experienced a higher physiological stress during the 1(st) half of the matches compared to the 2(nd) half.Movement patterns differed in accordance with the players' positions but that all players experienced a similar physiological stress during match play.Approximately one quarter of total distance was covered at speeds that exceeded the 4 mmol·L(-1) fixed LA threshold.Total distance covered was influenced by running speeds at fixed lactate concentrations in young soccer players during match play.",
"title": ""
},
{
"docid": "d7e7cdc9ac55d5af199395becfe02d73",
"text": "Text recognition in images is a research area which attempts to develop a computer system with the ability to automatically read the text from images. These days there is a huge demand in storing the information available in paper documents format in to a computer storage disk and then later reusing this information by searching process. One simple way to store information from these paper documents in to computer system is to first scan the documents and then store them as images. But to reuse this information it is very difficult to read the individual contents and searching the contents form these documents line-by-line and word-by-word. The challenges involved in this the font characteristics of the characters in paper documents and quality of images. Due to these challenges, computer is unable to recognize the characters while reading them. Thus there is a need of character recognition mechanisms to perform Document Image Analysis (DIA) which transforms documents in paper format to electronic format. In this paper we have discuss method for text recognition from images. The objective of this paper is to recognition of text from image for better understanding of the reader by using particular sequence of different processing module.",
"title": ""
},
{
"docid": "f4415b932387c748a30c6a8f86e0c1ea",
"text": "The broaden-and-build theory describes the form and function of a subset of positive emotions, including joy, interest, contentment and love. A key proposition is that these positive emotions broaden an individual's momentary thought-action repertoire: joy sparks the urge to play, interest sparks the urge to explore, contentment sparks the urge to savour and integrate, and love sparks a recurring cycle of each of these urges within safe, close relationships. The broadened mindsets arising from these positive emotions are contrasted to the narrowed mindsets sparked by many negative emotions (i.e. specific action tendencies, such as attack or flee). A second key proposition concerns the consequences of these broadened mindsets: by broadening an individual's momentary thought-action repertoire--whether through play, exploration or similar activities--positive emotions promote discovery of novel and creative actions, ideas and social bonds, which in turn build that individual's personal resources; ranging from physical and intellectual resources, to social and psychological resources. Importantly, these resources function as reserves that can be drawn on later to improve the odds of successful coping and survival. This chapter reviews the latest empirical evidence supporting the broaden-and-build theory and draws out implications the theory holds for optimizing health and well-being.",
"title": ""
},
{
"docid": "1dccd5745d29310e2ca1b9f302efd0bb",
"text": "Graph structure which is often used to model the relationship between the data items has drawn more and more attention. The graph datasets from many important domains have the property called scale-free. In the scale-free graphs, there exist the hubs, which have much larger degree than the average value. The hubs may cause the problems of load imbalance, poor scalability and high communication overhead when the graphs are processed in the distributed memory systems. In this paper, we design an asynchronous graph processing framework targeted for distributed memory by considering the hubs as a separate part of the vertexes, which we call it the hub-centric idea. Specifically speaking, a hub-duplicate graph partitioning method is proposed to balance the workload and reduce the communication overhead. At the same time, an efficient asynchronous state synchronization method for the duplicates is also proposed. In addition, a priority scheduling strategy is applied to further reduce the communication overhead.",
"title": ""
},
{
"docid": "9327ab4f9eba9a32211ddb39463271b1",
"text": "We investigate techniques for visualizing time series data and evaluate their effect in value comparison tasks. We compare line charts with horizon graphs - a space-efficient time series visualization technique - across a range of chart sizes, measuring the speed and accuracy of subjects' estimates of value differences between charts. We identify transition points at which reducing the chart height results in significantly differing drops in estimation accuracy across the compared chart types, and we find optimal positions in the speed-accuracy tradeoff curve at which viewers performed quickly without attendant drops in accuracy. Based on these results, we propose approaches for increasing data density that optimize graphical perception.",
"title": ""
}
] |
scidocsrr
|
aa2154918a45ccf740d744604925ba81
|
Modelling Compression with Discourse Constraints
|
[
{
"docid": "f48ce749a592d83a8fd60485b6b87ea6",
"text": "We present a system for the semantic role labeling task. The system combines a machine learning technique with an inference procedure based on integer linear programming that supports the incorporation of linguistic and structural constraints into the decision process. The system is tested on the data provided in CoNLL2004 shared task on semantic role labeling and achieves very competitive results.",
"title": ""
}
] |
[
{
"docid": "a8b99c09d71135f96a21600527dd58fa",
"text": "When a program is modified during software evolution, developers typically run the new version of the program against its existing test suite to validate that the changes made on the program did not introduce unintended side effects (i.e., regression faults). This kind of regression testing can be effective in identifying some regression faults, but it is limited by the quality of the existing test suite. Due to the cost of testing, developers build test suites by finding acceptable tradeoffs between cost and thoroughness of the tests. As a result, these test suites tend to exercise only a small subset of the program's functionality and may be inadequate for testing the changes in a program. To address this issue, we propose a novel approach called Behavioral Regression Testing (BERT). Given two versions of a program, BERT identifies behavioral differences between the two versions through dynamical analysis, in three steps. First, it generates a large number of test inputs that focus on the changed parts of the code. Second, it runs the generated test inputs on the old and new versions of the code and identifies differences in the tests' behavior. Third, it analyzes the identified differences and presents them to the developers. By focusing on a subset of the code and leveraging differential behavior, BERT can provide developers with more (and more detailed) information than traditional regression testing techniques. To evaluate BERT, we implemented it as a plug-in for Eclipse, a popular Integrated Development Environment, and used the plug-in to perform a preliminary study on two programs. The results of our study are promising, in that BERT was able to identify true regression faults in the programs.",
"title": ""
},
{
"docid": "7b4400c6ef5801e60a6f821810538381",
"text": "A CMOS self-biased fully differential amplifier is presented. Due to the self-biasing structure of the amplifier and its associated negative feedback, the amplifier is compensated to achieve low sensitivity to process, supply voltage and temperature (PVT) variations. The output common-mode voltage of the amplifier is adjusted through the same biasing voltages provided by the common-mode feedback (CMFB) circuit. The amplifier core is based on a simple structure that uses two CMOS inverters to amplify the input differential signal. Despite its simple structure, the proposed amplifier is attractive to a wide range of applications, specially those requiring low power and small silicon area. As two examples, a sample-and-hold circuit and a second order multi-bit sigma-delta modulator either employing the proposed amplifier are presented. Besides these application examples, a set of amplifier performance parameters is given.",
"title": ""
},
{
"docid": "3edab364abeabc97b55e8d711217b734",
"text": "To facilitate collaboration over sensitive data, we present DataSynthesizer, a tool that takes a sensitive dataset as input and generates a structurally and statistically similar synthetic dataset with strong privacy guarantees. The data owners need not release their data, while potential collaborators can begin developing models and methods with some confidence that their results will work similarly on the real dataset. The distinguishing feature of DataSynthesizer is its usability --- the data owner does not have to specify any parameters to start generating and sharing data safely and effectively.\n DataSynthesizer consists of three high-level modules --- DataDescriber, DataGenerator and ModelInspector. The first, DataDescriber, investigates the data types, correlations and distributions of the attributes in the private dataset, and produces a data summary, adding noise to the distributions to preserve privacy. DataGenerator samples from the summary computed by DataDescriber and outputs synthetic data. ModelInspector shows an intuitive description of the data summary that was computed by DataDescriber, allowing the data owner to evaluate the accuracy of the summarization process and adjust any parameters, if desired.\n We describe DataSynthesizer and illustrate its use in an urban science context, where sharing sensitive, legally encumbered data between agencies and with outside collaborators is reported as the primary obstacle to data-driven governance.\n The code implementing all parts of this work is publicly available at https://github.com/DataResponsibly/DataSynthesizer.",
"title": ""
},
{
"docid": "34208fafbb3009a1bb463e3d8d983e61",
"text": "A large and growing number of web pages display contextual advertising based on keywords automatically extracted from the text of the page, and this is a substantial source of revenue supporting the web today. Despite the importance of this area, little formal, published research exists. We describe a system that learns how to extract keywords from web pages for advertisement targeting. The system uses a number of features, such as term frequency of each potential keyword, inverse document frequency, presence in meta-data, and how often the term occurs in search query logs. The system is trained with a set of example pages that have been hand-labeled with \"relevant\" keywords. Based on this training, it can then extract new keywords from previously unseen pages. Accuracy is substantially better than several baseline systems.",
"title": ""
},
{
"docid": "1a66727305984ae359648e4bd3e75ba2",
"text": "Self-organizing models constitute valuable tools for data visualization, clustering, and data mining. Here, we focus on extensions of basic vector-based models by recursive computation in such a way that sequential and tree-structured data can be processed directly. The aim of this article is to give a unified review of important models recently proposed in literature, to investigate fundamental mathematical properties of these models, and to compare the approaches by experiments. We first review several models proposed in literature from a unifying perspective, thereby making use of an underlying general framework which also includes supervised recurrent and recursive models as special cases. We shortly discuss how the models can be related to different neuron lattices. Then, we investigate theoretical properties of the models in detail: we explicitly formalize how structures are internally stored in different context models and which similarity measures are induced by the recursive mapping onto the structures. We assess the representational capabilities of the models, and we shortly discuss the issues of topology preservation and noise tolerance. The models are compared in an experiment with time series data. Finally, we add an experiment for one context model for tree-structured data to demonstrate the capability to process complex structures.",
"title": ""
},
{
"docid": "ca4696183f72882d2f69cc17ab761ef3",
"text": "Entropy, as it relates to dynamical systems, is the rate of information production. Methods for estimation of the entropy of a system represented by a time series are not, however, well suited to analysis of the short and noisy data sets encountered in cardiovascular and other biological studies. Pincus introduced approximate entropy (ApEn), a set of measures of system complexity closely related to entropy, which is easily applied to clinical cardiovascular and other time series. ApEn statistics, however, lead to inconsistent results. We have developed a new and related complexity measure, sample entropy (SampEn), and have compared ApEn and SampEn by using them to analyze sets of random numbers with known probabilistic character. We have also evaluated cross-ApEn and cross-SampEn, which use cardiovascular data sets to measure the similarity of two distinct time series. SampEn agreed with theory much more closely than ApEn over a broad range of conditions. The improved accuracy of SampEn statistics should make them useful in the study of experimental clinical cardiovascular and other biological time series.",
"title": ""
},
{
"docid": "71cc535dcae1b50f9fe3314f4140d916",
"text": "Information and communications technology has fostered the rise of the sharing economy, enabling individuals to share excess capacity. In this paper, we focus on Airbnb.com, which is among the most prominent examples of the sharing economy. We take the perspective of an accommodation provider and investigate the concept of trust, which facilitates complete strangers to form temporal C2C relationships on Airbnb.com. In fact, the implications of trust in the sharing economy fundamentally differ to related online industries. In our research model, we investigate the formation of trust by incorporating two antecedents – ‘Disposition to trust’ and ‘Familiarity with Airbnb.com’. Furthermore, we differentiate between ‘Trust in Airbnb.com’ and ‘Trust in renters’ and examine their implications on two provider intentions. To seek support for our research model, we conducted a survey with 189 participants. The results show that both trust constructs are decisive to successfully initiate a sharing deal between two parties.",
"title": ""
},
{
"docid": "3335a737dbd959b6ea69b240a053f1e9",
"text": "The amount of effort needed to maintain a software system is related to the technical quality of the source code of that system. The ISO 9126 model for software product quality recognizes maintainability as one of the 6 main characteristics of software product quality, with adaptability, changeability, stability, and testability as subcharacteristics of maintainability. Remarkably, ISO 9126 does not provide a consensual set of measures for estimating maintainability on the basis of a system's source code. On the other hand, the maintainability index has been proposed to calculate a single number that expresses the maintainability of a system. In this paper, we discuss several problems with the MI, and we identify a number of requirements to be fulfilled by a maintainability model to be usable in practice. We sketch a new maintainability model that alleviates most of these problems, and we discuss our experiences with using such as system for IT management consultancy activities.",
"title": ""
},
{
"docid": "82d7a2b6045e90731d510ce7cce1a93c",
"text": "INTRODUCTION\nExtracellular vesicles (EVs) are critical mediators of intercellular communication, capable of regulating the transcriptional landscape of target cells through horizontal transmission of biological information, such as proteins, lipids, and RNA species. This capability highlights their potential as novel targets for disease intervention. Areas covered: This review focuses on the emerging importance of discovery proteomics (high-throughput, unbiased quantitative protein identification) and targeted proteomics (hypothesis-driven quantitative protein subset analysis) mass spectrometry (MS)-based strategies in EV biology, especially exosomes and shed microvesicles. Expert commentary: Recent advances in MS hardware, workflows, and informatics provide comprehensive, quantitative protein profiling of EVs and EV-treated target cells. This information is seminal to understanding the role of EV subtypes in cellular crosstalk, especially when integrated with other 'omics disciplines, such as RNA analysis (e.g., mRNA, ncRNA). Moreover, high-throughput MS-based proteomics promises to provide new avenues in identifying novel markers for detection, monitoring, and therapeutic intervention of disease.",
"title": ""
},
{
"docid": "a63cc19137ead27acf5530c0bdb924f5",
"text": "We in this paper solve the problem of high-quality automatic real-time background cut for 720p portrait videos. We first handle the background ambiguity issue in semantic segmentation by proposing a global background attenuation model. A spatial-temporal refinement network is developed to further refine the segmentation errors in each frame and ensure temporal coherence in the segmentation map. We form an end-to-end network for training and testing. Each module is designed considering efficiency and accuracy. We build a portrait dataset, which includes 8,000 images with high-quality labeled map for training and testing. To further improve the performance, we build a portrait video dataset with 50 sequences to fine-tune video segmentation. Our framework benefits many video processing applications.",
"title": ""
},
{
"docid": "f3083088c9096bb1932b139098cbd181",
"text": "OBJECTIVE\nMaiming and death due to dog bites are uncommon but preventable tragedies. We postulated that patients admitted to a level I trauma center with dog bites would have severe injuries and that the gravest injuries would be those caused by pit bulls.\n\n\nDESIGN\nWe reviewed the medical records of patients admitted to our level I trauma center with dog bites during a 15-year period. We determined the demographic characteristics of the patients, their outcomes, and the breed and characteristics of the dogs that caused the injuries.\n\n\nRESULTS\nOur Trauma and Emergency Surgery Services treated 228 patients with dog bite injuries; for 82 of those patients, the breed of dog involved was recorded (29 were injured by pit bulls). Compared with attacks by other breeds of dogs, attacks by pit bulls were associated with a higher median Injury Severity Scale score (4 vs. 1; P = 0.002), a higher risk of an admission Glasgow Coma Scale score of 8 or lower (17.2% vs. 0%; P = 0.006), higher median hospital charges ($10,500 vs. $7200; P = 0.003), and a higher risk of death (10.3% vs. 0%; P = 0.041).\n\n\nCONCLUSIONS\nAttacks by pit bulls are associated with higher morbidity rates, higher hospital charges, and a higher risk of death than are attacks by other breeds of dogs. Strict regulation of pit bulls may substantially reduce the US mortality rates related to dog bites.",
"title": ""
},
{
"docid": "7f69fbcda9d6ee11d5cc1591a88b6403",
"text": "Voice conversion is defined as modifying the speech signal of one speaker (source speaker) so that it sounds as if it had been pronounced by a different speaker (target speaker). This paper describes a system for efficient voice conversion. A novel mapping function is presented which associates the acoustic space of the source speaker with the acoustic space of the target speaker. The proposed system is based on the use of a Gaussian Mixture Model, GMM, to model the acoustic space of a speaker and a pitch synchronous harmonic plus noise representation of the speech signal for prosodic modifications. The mapping function is a continuous parametric function which takes into account the probab ilistic classification provided by the mixture model (GMM). Evaluation by objective tests showed that the proposed system was able to reduce the perceptual distance between the source and target speaker by 70%. Formal listening tests also showed that 97% of the converted speech was judged to be spoken from the target speaker while maintaining high speech qua lity.",
"title": ""
},
{
"docid": "1fdecf272795a163d32838022247568e",
"text": "This paper presents an anisotropy-based position estimation approach taking advantage of saturation effects in permanent magnet synchronous machines (PMSM). Due to magnetic anisotropies of the electrical machine, current responses to high-frequency voltage excitations contain rotor position information. Therefore, the rotor position can be estimated by means of these current responses. The relation between the high-frequency current changes, the applied phase voltages and the rotor position is given by the inverse inductance matrix of the machine. In this paper, an analytical model of the inverse inductance matrix considering secondary anisotropies and saturation effects is proposed. It is shown that the amount of rotor position information contained in these current changes depends on the direction of the voltage excitation and the operating point. By means of this knowledge, a position estimation approach for slowly-sampled control systems is developed. Experimental results show the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "c16a6e967bec774cdefacc110753743e",
"text": "In this letter, a top-gated field-effect device (FED) manufactured from monolayer graphene is investigated. Except for graphene deposition, a conventional top-down CMOS-compatible process flow is applied. Carrier mobilities in graphene pseudo-MOS structures are compared to those obtained from the top-gated Graphene-FEDs. The extracted values exceed the universal mobility of silicon and silicon-on-insulator MOSFETs",
"title": ""
},
{
"docid": "a31d88d98a3a335979a271c9bc57b86f",
"text": "Sympathetic nervous system (SNS) activity plays a significant role in cardiovascular control. Preejection period (PEP) is a noninvasive biomarker that reflects SNS activity. In this paper, unobtrusive estimation of PEP of the heart using ballistocardiogram (BCG) and electrocardiogram (ECG) signals is investigated. Although previous work has shown that the time intervals from ECG R-peak to BCG I and J peaks are correlated with PEP, relying on a single BCG beat can be prone to errors. An approach is proposed based on multiple regression and use of initial training data sets with a reference standard, impedance cardiography (ICG). For evaluation, healthy subjects were asked to stand on a force plate to record BCG and ECG signals. Regression coefficients were obtained using leave-one-out cross-validation and the true PEP values were obtained using ECG and ICG. Regression coefficients were averaged over two different recordings from the same subjects. The estimation performance was evaluated based on the data, via leave-one-out cross-validation. Multiple regression is shown to reduce the mean absolute error and the root mean square error, and has a reduced confidence interval compared with the models based on only a single feature. This paper shows that the fusion of multiple timing intervals can be useful for improved PEP estimation.",
"title": ""
},
{
"docid": "d74486ee2c479d6f644630e38f90f386",
"text": "ion? Does it supplant the real or is there, in it, reality itself? Like so many true things, this one doesn't resolve itself to a black or a white. Nor is it gray. It is, along with the rest of life, black/white. Both/neither.\" {John Perry Barlow 1995, p. 56) 1. What Is Infrastructure? People who study how technology affects organizational transformation increasingly recognize its dual, paradoxical nature. It is both engine and barrier for change; both customizable and rigid; both inside and outside organizational practices. It is product and process. Some authors have analyzed this seeming paradox as structuration: (after Giddens)—technological rigidities give rise to adaptations which in turn require calibration and standardization. Over time, structureagency relations re-form dialectically (Orlikowski 1991, Davies and Mitchell 1994, Korpela 1994). This paradox is integral to large scale, dispersed technologies (Brown 1047-7047/96/0701/0111$01.25 Copyright © 1996, Institute for Operations Research and the Management Sciences INFORMATION SYSTEMS RESEARCH Vol. 7, No. 1, March 1996 111",
"title": ""
},
{
"docid": "2e6b034cbb73d91b70e3574a06140621",
"text": "ETHNOPHARMACOLOGICAL RELEVANCE\nBitter melon (Momordica charantia L.) has been widely used as an traditional medicine treatment for diabetic patients in Asia. In vitro and animal studies suggested its hypoglycemic activity, but limited human studies are available to support its use.\n\n\nAIM OF STUDY\nThis study was conducted to assess the efficacy and safety of three doses of bitter melon compared with metformin.\n\n\nMATERIALS AND METHODS\nThis is a 4-week, multicenter, randomized, double-blind, active-control trial. Patients were randomized into 4 groups to receive bitter melon 500 mg/day, 1,000 mg/day, and 2,000 mg/day or metformin 1,000 mg/day. All patients were followed for 4 weeks.\n\n\nRESULTS\nThere was a significant decline in fructosamine at week 4 of the metformin group (-16.8; 95% CI, -31.2, -2.4 μmol/L) and the bitter melon 2,000 mg/day group (-10.2; 95% CI, -19.1, -1.3 μmol/L). Bitter melon 500 and 1,000 mg/day did not significantly decrease fructosamine levels (-3.5; 95% CI -11.7, 4.6 and -10.3; 95% CI -22.7, 2.2 μmol/L, respectively).\n\n\nCONCLUSIONS\nBitter melon had a modest hypoglycemic effect and significantly reduced fructosamine levels from baseline among patients with type 2 diabetes who received 2,000 mg/day. However, the hypoglycemic effect of bitter melon was less than metformin 1,000 mg/day.",
"title": ""
},
{
"docid": "dbf5fd755e91c4a67446dcce2d8759ba",
"text": "JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact [email protected]. .",
"title": ""
},
{
"docid": "938aecbc66963114bf8753d94f7f58ed",
"text": "OBJECTIVE\nTo observe the clinical effect of bee-sting (venom) therapy in the treatment of rheumatoid arthritis (RA).\n\n\nMETHODS\nOne hundred RA patients were randomly divided into medication (control) group and bee-venom group, with 50 cases in each. Patients of control group were treated with oral administration of Methotrexate (MTX, 7.5 mg/w), Sulfasalazine (0.5 g,t. i.d.), Meloxicam (Mobic,7. 5 mg, b. i. d.); and those of bee-venom group treated with Bee-sting of Ashi-points and the above-mentioned Western medicines. Ashi-points were selected according to the position of RA and used as the main acupoints, supplemented with other acupoints according to syndrome differentiation. The treatment was given once every other day and all the treatments lasted for 3 months.\n\n\nRESULTS\nCompared with pre-treatment, scores of joint swelling degree, joint activity, pain, and pressing pain, joint-swelling number, grasp force, 15 m-walking duration, morning stiff duration in bee-venom group and medication group were improved significantly (P<0.05, 0.01). Comparison between two groups showed that after the therapy, scores of joint swelling, pain and pressing pain, joint-swelling number and morning stiff duration, and the doses of the administered MTX and Mobic in bee-venom group were all significantly lower than those in medication group (P<0.05, 0.01); whereas the grasp force in been-venom group was markedly higher than that in medication group (P<0.05). In addition, the relapse rate of bee-venom group was obviously lower than that of medication group (P<0.05; 12% vs 32%).\n\n\nCONCLUSION\nCombined application of bee-venom therapy and medication is superior to simple use of medication in relieving RA, and when bee-sting therapy used, the commonly-taken doses of western medicines may be reduced, and the relapse rate gets lower.",
"title": ""
}
] |
scidocsrr
|
0ac76efb44bc30022c891168e76bdec6
|
UNIQ: Uniform Noise Injection for the Quantization of Neural Networks
|
[
{
"docid": "b9aa1b23ee957f61337e731611a6301a",
"text": "We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFatNet opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 4-bit gradients to get 47% top-1 accuracy on ImageNet validation set.1 The DoReFa-Net AlexNet model is released publicly.",
"title": ""
}
] |
[
{
"docid": "a65b11ebb320e4883229f4a50d51ae2f",
"text": "Vast quantities of text are becoming available in electronic form, ranging from published documents (e.g., electronic dictionaries, encyclopedias, libraries and archives for information retrieval services), to private databases (e.g., marketing information, legal records, medical histories), to personal email and faxes. Online information services are reaching mainstream computer users. There were over 15 million Internet users in 1993, and projections are for 30 million in 1997. With media attention reaching all-time highs, hardly a day goes by without a new article on the National Information Infrastructure, digital libraries, networked services, digital convergence or intelligent agents. This attention is moving natural language processing along the critical path for all kinds of novel applications.",
"title": ""
},
{
"docid": "38382c04e7dc46f5db7f2383dcae11fb",
"text": "Motor schemas serve as the basic unit of behavior specification for the navigation of a mobile robot. They are multiple concurrent processes that operate in conjunction with associated perceptual schemas and contribute independently to the overall concerted action of the vehicle. The motivation behind the use of schemas for this domain is drawn from neuroscientific, psychological, and robotic sources. A variant of the potential field method is used to produce the appropriate velocity and steering commands for the robot. Simulation results and actual mobile robot experiments demonstrate the feasibility of this approach.",
"title": ""
},
{
"docid": "1cf9b5be1bc849a25a45123c95ac6217",
"text": "In the discipline of accounting, the resource-event-agent (REA) ontology is a well accepted conceptual accounting framework to analyze the economic phenomena within and across enterprises. Accordingly, it seems to be appropriate to use REA in the requirements elicitation to develop an information architecture of accounting and enterprise information systems. However, REA has received comparatively less attention in the field of business informatics and computer science. Some of the reasons may be that the REA ontology despite of its well grounded core concepts is (1) sometimes vague in the definition of the relationships between these core concepts, (2) misses a precise language to describe the models, and (3) does not come with an easy to understand graphical notation. Accordingly, we have started developing a domain specific modeling language specifically dedicated to REA models and corresponding tool support to overcome these limitations. In this paper we present our REA DSL which supports the basic set of REA concepts.",
"title": ""
},
{
"docid": "6534e22a4160d547094c0bb38588b5d5",
"text": "This paper presents the comparative analysis between constant duty cycle and Perturb & Observe (P&O) algorithm for extracting the power from Photovoltaic Array (PVA). Because of nonlinear characteristics of PV cell, the maximum power can be extract under particular voltage condition. Therefore, Maximum Power Point Tracking (MPPT) algorithms are used in PVA to maximize the output power. In this paper the MPPT algorithm is implemented using Ćuk converter. The dynamics of PVA is simulated at different solar irradiance and cell temperature. The P&O MPPT technique is a direct control method enables ease to implement and less complexity.",
"title": ""
},
{
"docid": "d69b8c991e66ff274af63198dba2ee01",
"text": "Nowadays, there are two significant tendencies, how to process the enormous amount of data, big data, and how to deal with the green issues related to sustainability and environmental concerns. An interesting question is whether there are inherent correlations between the two tendencies in general. To answer this question, this paper firstly makes a comprehensive literature survey on how to green big data systems in terms of the whole life cycle of big data processing, and then this paper studies the relevance between big data and green metrics and proposes two new metrics, effective energy efficiency and effective resource efficiency in order to bring new views and potentials of green metrics for the future times of big data.",
"title": ""
},
{
"docid": "9cf3df49790c1d2107035ef868f8be1e",
"text": "As computational thinking becomes a fundamental skill for the 21st century, K-12 teachers should be exposed to computing principles. This paper describes the implementation and evaluation of a computational thinking module in a required course for elementary and secondary education majors. We summarize the results from open-ended and multiple-choice questionnaires given both before and after the module to assess the students' attitudes toward and understanding of computational thinking. The results suggest that given relevant information about computational thinking, education students' attitudes toward computer science becomes more favorable and they will be more likely to integrate computing principles in their future teaching.",
"title": ""
},
{
"docid": "669c6fee3153c88a8e8a35d6331a11ca",
"text": "We present a method for classifying products into a set of known categories by using supervised learning. That is, given a product with accompanying informational details such as name and descriptions, we group the product into a particular category with similar products, e.g., ‘Electronics’ or ‘Automotive’. To do this, we analyze product catalog information from different distributors on Amazon.com to build features for a classifier. Our implementation results show significant improvement over baseline results. Taking into particular criteria, our implementation is potentially able to substantially increase automation of categorization of products. General Terms Supervised and Unsupervised Learning, E-Commerce",
"title": ""
},
{
"docid": "ccf7390abc2924e4d2136a2b82639115",
"text": "The proposition of increased innovation in network applications and reduced cost for network operators has won over the networking world to the vision of software-defined networking (SDN). With the excitement of holistic visibility across the network and the ability to program network devices, developers have rushed to present a range of new SDN-compliant hardware, software, and services. However, amidst this frenzy of activity, one key element has only recently entered the debate: Network Security. In this paper, security in SDN is surveyed presenting both the research community and industry advances in this area. The challenges to securing the network from the persistent attacker are discussed, and the holistic approach to the security architecture that is required for SDN is described. Future research directions that will be key to providing network security in SDN are identified.",
"title": ""
},
{
"docid": "339aa2d53be2cf1215caa142ad5c58d2",
"text": "A true random number generator (TRNG) is an important component in cryptographic systems. Designing a fast and secure TRNG in an FPGA is a challenging task. In this paper we analyze the TRNG designed by Sunar et al. based on XOR of the outputs of many oscillator rings. We propose an enhanced TRNG that does not require post-processing to pass statistical tests and with better randomness characteristics on the output. We have shown by experiment that the frequencies of the equal length oscillator rings in the TRNG are not identical but different due to the placement of the inverters in the FPGA. We have implemented our proposed TRNG in an Altera Cyclone II FPGA. Our implementation has passed the NIST and DIEHARD statistical tests with a throughput of 100 Mbps and with a usage of less than 100 logic elements in the FPGA.",
"title": ""
},
{
"docid": "b4352773c64dea1e8d354dad0cd76dfa",
"text": "Objective: to describe the epidemiological and sociodemographic characteristics of patients hospitalized in an ICU. Method: an epidemiological, descriptive and retrospective study. Population: 695 patients admitted from January to December 2011. The data collected were statistically analyzed with both absolute and relative frequency distribution. Results: 61.6% of the patients are male, aged 40 to 69 years, and most of them came from the surgery rooms. The most frequent reason for admission was diseases of the circulatory system (23.3%). At discharge from the ICU, 72.4% of the patients were sent to other units of the same institution, 31.1% to the intermediate care unit, and 20.4% died, of which 24.6% from diseases of the circulatory system. The afternoon shift had 45.8% of the admissions and 53.3% of the discharges. Conclusion: the description of the sociodemographic and epidemiological features guides the planning of nursing actions, providing a better quality service.",
"title": ""
},
{
"docid": "55a1bedc3aa007a4e8bbc77d6f710d7f",
"text": "The purpose of the present study was to develop and validate a self-report instrument that measures the nature of the coach-athlete relationship. Jowett et al.'s (Jowett & Meek, 2000; Jowett, in press) qualitative case studies and relevant literature were used to generate items for an instrument that measures affective, cognitive, and behavioral aspects of the coach-athlete relationship. Two studies were carried out in an attempt to assess content, predictive, and construct validity, as well as internal consistency, of the Coach-Athlete Relationship Questionnaire (CART-Q), using two independent British samples. Principal component analysis and confirmatory factor analysis were used to reduce the number of items, identify principal components, and confirm the latent structure of the CART-Q. Results supported the multidimensional nature of the coach-athlete relationship. The latent structure of the CART-Q was underlined by the latent variables of coaches' and athletes' Closeness (emotions), Commitment (cognitions), and Complementarity (behaviors).",
"title": ""
},
{
"docid": "b3eefd1fa34f0eb02541b598881396f9",
"text": "We present a complete scalable system for 6 d.o.f. camera tracking based on natural features. Crucially, the calculation is based only on pre-captured reference images and previous estimates of the camera pose and is hence suitable for online applications. We match natural features in the current frame to two spatially separated reference images. We overcome the wide baseline matching problem by matching to the previous frame and transferring point positions to the reference images. We then minimize deviations from the two-view and three-view constraints between the reference images and the current frame as a function of the camera position parameters. We stabilize this calculation using a recursive form of temporal regularization that is similar in spirit to the Kalman filter. We can track camera pose over hundreds of frames and realistically integrate virtual objects with only slight jitter.",
"title": ""
},
{
"docid": "109525927d05ea8dcf4e2785204895f3",
"text": "Information network embedding is an effective way for efficient graph analytics. However, it still faces with computational challenges in problems such as link prediction and node recommendation, particularly with increasing scale of networks. Hashing is a promising approach for accelerating these problems by orders of magnitude. However, no prior studies have been focused on seeking binary codes for information networks to preserve high-order proximity. Since matrix factorization (MF) unifies and outperforms several well-known embedding methods with high-order proximity preserved, we propose a MF-based \\underlineI nformation \\underlineN etwork \\underlineH ashing (INH-MF) algorithm, to learn binary codes which can preserve high-order proximity. We also suggest Hamming subspace learning, which only updates partial binary codes each time, to scale up INH-MF. We finally evaluate INH-MF on four real-world information network datasets with respect to the tasks of node classification and node recommendation. The results demonstrate that INH-MF can perform significantly better than competing learning to hash baselines in both tasks, and surprisingly outperforms network embedding methods, including DeepWalk, LINE and NetMF, in the task of node recommendation. The source code of INH-MF is available online\\footnote\\urlhttps://github.com/DefuLian/network .",
"title": ""
},
{
"docid": "7697aa5665f4699f2000779db2b0d24f",
"text": "The majority of smart devices used nowadays (e.g., smartphones, laptops, tablets) is capable of both Wi-Fi and Bluetooth wireless communications. Both network interfaces are identified by a unique 48-bits MAC address, assigned during the manufacturing process and unique worldwide. Such addresses, fundamental for link-layer communications and contained in every frame transmitted by the device, can be easily collected through packet sniffing and later used to perform higher level analysis tasks (user tracking, crowd density estimation, etc.). In this work we propose a system to pair the Wi-Fi and Bluetooth MAC addresses belonging to a physical unique device, starting from packets captured through a network of wireless sniffers. We propose several algorithms to perform such a pairing and we evaluate their performance through experiments in a controlled scenario. We show that the proposed algorithms can pair the MAC addresses with good accuracy. The findings of this paper may be useful to improve the precision of indoor localization and crowd density estimation systems and open some questions on the privacy issues of Wi-Fi and Bluetooth enabled devices.",
"title": ""
},
{
"docid": "93fad9723826fcf99ae229b4e7298a31",
"text": "In this work, we provide the first construction of Attribute-Based Encryption (ABE) for general circuits. Our construction is based on the existence of multilinear maps. We prove selective security of our scheme in the standard model under the natural multilinear generalization of the BDDH assumption. Our scheme achieves both Key-Policy and Ciphertext-Policy variants of ABE. Our scheme and its proof of security directly translate to the recent multilinear map framework of Garg, Gentry, and Halevi.",
"title": ""
},
{
"docid": "2d0765e6b695348dea8822f695dcbfa1",
"text": "Social networks are currently gaining increasing impact especially in the light of the ongoing growth of web-based services like facebook.com. A central challenge for the social network analysis is the identification of key persons within a social network. In this context, the article aims at presenting the current state of research on centrality measures for social networks. In view of highly variable findings about the quality of various centrality measures, we also illustrate the tremendous importance of a reflected utilization of existing centrality measures. For this purpose, the paper analyzes five common centrality measures on the basis of three simple requirements for the behavior of centrality measures.",
"title": ""
},
{
"docid": "30ae1d2d45e11c8f6212ff0a54abec7a",
"text": "This paper describes the fifth year of the Sentiment Analysis in Twitter task. SemEval-2017 Task 4 continues with a rerun of the subtasks of SemEval-2016 Task 4, which include identifying the overall sentiment of the tweet, sentiment towards a topic with classification on a twopoint and on a five-point ordinal scale, and quantification of the distribution of sentiment towards a topic across a number of tweets: again on a two-point and on a five-point ordinal scale. Compared to 2016, we made two changes: (i) we introduced a new language, Arabic, for all subtasks, and (ii) we made available information from the profiles of the Twitter users who posted the target tweets. The task continues to be very popular, with a total of 48 teams participating this year.",
"title": ""
},
{
"docid": "4d26d3823e3889c22fe517857a49d508",
"text": "As an object moves through the field of view of a camera, the images of the object may change dramatically. This is not simply due to the translation of the object across the image plane. Rather, complications arise due to the fact that the object undergoes changes in pose relative to the viewing camera, changes in illumination relative to light sources, and may even become partially or fully occluded. In this paper, we develop an efficient, general framework for object tracking—one which addresses each of these complications. We first develop a computationally efficient method for handling the geometric distortions produced by changes in pose. We then combine geometry and illumination into an algorithm that tracks large image regions using no more computation than would be required to track with no accommodation for illumination changes. Finally, we augment these methods with techniques from robust statistics and treat occluded regions on the object as statistical outliers. Throughout, we present experimental results performed on live video sequences demonstrating the effectiveness and efficiency of our methods.",
"title": ""
},
{
"docid": "27316b23e7a7cd163abd40f804caf61b",
"text": "Attention based recurrent neural networks (RNN) have shown a great success for question answering (QA) in recent years. Although significant improvements have been achieved over the non-attentive models, the position information is not well studied within the attention-based framework. Motivated by the effectiveness of using the word positional context to enhance information retrieval, we assume that if a word in the question (i.e., question word) occurs in an answer sentence, the neighboring words should be given more attention since they intuitively contain more valuable information for question answering than those far away. Based on this assumption, we propose a positional attention based RNN model, which incorporates the positional context of the question words into the answers' attentive representations. Experiments on two benchmark datasets show the great advantages of our proposed model. Specifically, we achieve a maximum improvement of 8.83% over the classical attention based RNN model in terms of mean average precision. Furthermore, our model is comparable to if not better than the state-of-the-art approaches for question answering.",
"title": ""
}
] |
scidocsrr
|
9217d2b8e887d8b2b61a954008f06d9b
|
A Study Using $n$-gram Features for Text Categorization
|
[
{
"docid": "97a7ebf3cffa55f97e28ca42d1239131",
"text": "The eeect of selecting varying numbers and kinds of features for use in predicting category membership was investigated on the Reuters and MUC-3 text categorization data sets. Good categorization performance was achieved using a statistical classiier and a proportional assignment strategy. The optimal feature set size for word-based indexing was found to be surprisingly low (10 to 15 features) despite the large training sets. The extraction of new text features by syntactic analysis and feature clustering was investigated on the Reuters data set. Syntactic indexing phrases, clusters of these phrases, and clusters of words were all found to provide less eeective representations than individual words.",
"title": ""
}
] |
[
{
"docid": "bfdbed47fc25bb6efbb649dd13fcdedf",
"text": "Passive haptic feedback is very compelling, but a different physical object is needed for each virtual object requiring haptic feedback. I propose to enhance passive haptics by exploiting visual dominance, enabling a single physical object to provide haptic feedback for many differently shaped virtual objects. Potential applications include virtual prototyping, redirected walking, entertainment, art, and training.",
"title": ""
},
{
"docid": "793082d8e5367625145a7d7993bec19f",
"text": "Future advanced driver assistant systems put high demands on the environmental perception especially in urban environments. Today's on-board sensors and on-board algorithms still do not reach a satisfying level of development from the point of view of robustness and availability. Thus, map data is often used as an additional data input to support the on-board sensor system and algorithms. The usage of map data requires a highly correct pose within the map even in cases of positioning errors by global navigation satellite systems or geometrical errors in the map data. In this paper we propose and compare two approaches for map-relative localization exclusively using a lane-level map. These approaches deliberately avoid the usage of detailed a priori maps containing point-landmarks, grids or road-markings. Additionally, we propose a grid-based on-board fusion of road-marking information and stationary obstacles addressing the problem of missing or incomplete road-markings in urban scenarios.",
"title": ""
},
{
"docid": "f7daa0d175d4a7ae8b0869802ff3c4ab",
"text": "Several consumer speech devices feature voice interfaces that perform on-device keyword spotting to initiate user interactions. Accurate on-device keyword spotting within a tight CPU budget is crucial for such devices. Motivated by this, we investigated two ways to improve deep neural network (DNN) acoustic models for keyword spotting without increasing CPU usage. First, we used low-rank weight matrices throughout the DNN. This allowed us to increase representational power by increasing the number of hidden nodes per layer without changing the total number of multiplications. Second, we used knowledge distilled from an ensemble of much larger DNNs used only during training. We systematically evaluated these two approaches on a massive corpus of far-field utterances. Alone both techniques improve performance and together they combine to give significant reductions in false alarms and misses without increasing CPU or memory usage.",
"title": ""
},
{
"docid": "af3297de35d49f774e2f31f31b09fd61",
"text": "This paper explores the phenomena of the emergence of the use of artificial intelligence in teaching and learning in higher education. It investigates educational implications of emerging technologies on the way students learn and how institutions teach and evolve. Recent technological advancements and the increasing speed of adopting new technologies in higher education are explored in order to predict the future nature of higher education in a world where artificial intelligence is part of the fabric of our universities. We pinpoint some challenges for institutions of higher education and student learning in the adoption of these technologies for teaching, learning, student support, and administration and explore further directions for research.",
"title": ""
},
{
"docid": "1e7c094acc791dcfad54e7eb9bf3a1fe",
"text": "Steganography is an ancient art. With the advent of computers, we have vast accessible bodies of data in which to hide information, and increasingly sophisticated techniques with which to analyze and recover that information. While much of the recent research in steganography has been centered on hiding data in images, many of the solutions that work for images are more complicated when applied to natural language text as a cover medium. Many approaches to steganalysis attempt to detect statistical anomalies in cover data which predict the presence of hidden information. Natural language cover texts must not only pass the statistical muster of automatic analysis, but also the minds of human readers. Linguistically naïve approaches to the problem use statistical frequency of letter combinations or random dictionary words to encode information. More sophisticated approaches use context-free grammars to generate syntactically correct cover text which mimics the syntax of natural text. None of these uses meaning as a basis for generation, and little attention is paid to the semantic cohesiveness of a whole text as a data point for statistical attack. This paper provides a basic introduction to steganography and steganalysis, with a particular focus on text steganography. Text-based information hiding techniques are discussed, providing motivation for moving toward linguistic steganography and steganalysis. We highlight some of the problems inherent in text steganography as well as issues with existing solutions, and describe linguistic problems with character-based, lexical, and syntactic approaches. Finally, the paper explores how a semantic and rhetorical generation approach suggests solutions for creating more believable cover texts, presenting some current and future issues in analysis and generation. The paper is intended to be both general enough that linguists without training in information security and computer science can understand the material, and specific enough that the linguistic and computational problems are described in adequate detail to justify the conclusions suggested.",
"title": ""
},
{
"docid": "32287cfcf9978e04bea4ab5f01a6f5da",
"text": "OBJECTIVE\nThe purpose of this study was to examine the relationship of performance on the Developmental Test of Visual-Motor Integration (VMI; Beery, 1997) to handwriting legibility in children attending kindergarten. The relationship of using lined versus unlined paper on letter legibility, based on a modified version of the Scale of Children's Readiness in PrinTing (Modified SCRIPT; Weil & Cunningham Amundson, 1994) was also investigated.\n\n\nMETHOD\nFifty-four typically developing kindergarten students were administered the VMI; 30 students completed the Modified SCRIPT with unlined paper, 24 students completed the Modified SCRIPT with lined paper. Students were assessed in the first quarter of the kindergarten school year and scores were analyzed using correlational and nonparametric statistical measures.\n\n\nRESULTS\nStrong positive relationships were found between VMI assessment scores and student's ability to legibly copy letterforms. Students who could copy the first nine forms on the VMI performed significantly better than students who could not correctly copy the first nine VMI forms on both versions of the Modified SCRIPT.\n\n\nCONCLUSION\nVisual-motor integration skills were shown to be related to the ability to copy letters legibly. These findings support the research of Weil and Cunningham Amundson. Findings from this study also support the conclusion that there is no significant difference in letter writing legibility between students who use paper with or without lines.",
"title": ""
},
{
"docid": "733e5961428e5aad785926e389b9bd75",
"text": "OBJECTIVE\nPeer support can be defined as the process of giving and receiving nonprofessional, nonclinical assistance from individuals with similar conditions or circumstances to achieve long-term recovery from psychiatric, alcohol, and/or other drug-related problems. Recently, there has been a dramatic rise in the adoption of alternative forms of peer support services to assist recovery from substance use disorders; however, often peer support has not been separated out as a formalized intervention component and rigorously empirically tested, making it difficult to determine its effects. This article reports the results of a literature review that was undertaken to assess the effects of peer support groups, one aspect of peer support services, in the treatment of addiction.\n\n\nMETHODS\nThe authors of this article searched electronic databases of relevant peer-reviewed research literature including PubMed and MedLINE.\n\n\nRESULTS\nTen studies met our minimum inclusion criteria, including randomized controlled trials or pre-/post-data studies, adult participants, inclusion of group format, substance use-related, and US-conducted studies published in 1999 or later. Studies demonstrated associated benefits in the following areas: 1) substance use, 2) treatment engagement, 3) human immunodeficiency virus/hepatitis C virus risk behaviors, and 4) secondary substance-related behaviors such as craving and self-efficacy. Limitations were noted on the relative lack of rigorously tested empirical studies within the literature and inability to disentangle the effects of the group treatment that is often included as a component of other services.\n\n\nCONCLUSION\nPeer support groups included in addiction treatment shows much promise; however, the limited data relevant to this topic diminish the ability to draw definitive conclusions. More rigorous research is needed in this area to further expand on this important line of research.",
"title": ""
},
{
"docid": "dec1296463199214ef67c1c9f5b848be",
"text": "The scope of this second edition of the introduction to fundamental distributed programming abstractions has been extended to cover 'Byzantine fault tolerance'. It includes algorithms to Whether rgui and function or matrix. Yes no plotting commands the same dim. For scenarios such as is in which are available packages still! The remote endpoint the same model second example in early. Variables are omitted the one way, datagram transports inherently support which used by swayne cook. The sense if you do this is somewhat. Under which they were specified by declaring the vector may make. It as not be digitally signed like the binding configuration. The states and unordered factors the printing of either rows. In the appropriate interpreter if and that locale. In this and can be ignored, for has. Values are used instead of choice the probability density. There are recognized read only last two http the details see below specify. One mode namely this is used. Look at this will contain a vector of multiple. Wilks you will look at this is quite hard. The character expansion are copied when character. For fitting function takes an expression, so called the object. However a parameter data analysis and, rbind or stem and qqplot. The result is in power convenience and the outer true as many. Functions can reduce the requester. In that are vectors or, data into a figure five values for linear regressions. Like structures are the language and stderr would fit hard to rules. Messages for reliable session concretely, ws rm standard bindings the device will launch a single. Consider the users note that device this. Alternatively ls can remove directory say consisting. The common example it has gone into groups ws rm support whenever you. But the previous commands can be used graphical parameters to specified. Also forms of filepaths and all the receiver. For statistical methods require some rather inflexible.",
"title": ""
},
{
"docid": "af8ddd6792a98ea3b59bdaab7c7fa045",
"text": "This research explores the alternative media ecosystem through a Twitter lens. Over a ten-month period, we collected tweets related to alternative narratives—e.g. conspiracy theories—of mass shooting events. We utilized tweeted URLs to generate a domain network, connecting domains shared by the same user, then conducted qualitative analysis to understand the nature of different domains and how they connect to each other. Our findings demonstrate how alternative news sites propagate and shape alternative narratives, while mainstream media deny them. We explain how political leanings of alternative news sites do not align well with a U.S. left-right spectrum, but instead feature an antiglobalist (vs. globalist) orientation where U.S. Alt-Right sites look similar to U.S. Alt-Left sites. Our findings describe a subsection of the emerging alternative media ecosystem and provide insight in how websites that promote conspiracy theories and pseudo-science may function to conduct underlying political agendas.",
"title": ""
},
{
"docid": "59a32ec5b88436eca75d8fa9aa75951b",
"text": "A visual-relational knowledge graph (KG) is a multi-relational graph whose entities are associated with images. We introduce ImageGraph, a KG with 1,330 relation types, 14,870 entities, and 829,931 images. Visual-relational KGs lead to novel probabilistic query types where images are treated as first-class citizens. Both the prediction of relations between unseen images and multi-relational image retrieval can be formulated as query types in a visual-relational KG. We approach the problem of answering such queries with a novel combination of deep convolutional networks and models for learning knowledge graph embeddings. The resulting models can answer queries such as “How are these two unseen images related to each other?\" We also explore a zero-shot learning scenario where an image of an entirely new entity is linked with multiple relations to entities of an existing KG. The multi-relational grounding of unseen entity images into a knowledge graph serves as the description of such an entity. We conduct experiments to demonstrate that the proposed deep architectures in combination with KG embedding objectives can answer the visual-relational queries efficiently and accurately.",
"title": ""
},
{
"docid": "ab2a73c5bf3c8d7c65cdde282de1b62c",
"text": "Centuries of co-evolution between Castanea spp. biodiversity and human populations has resulted in the spread of rich and varied chestnut genetic diversity throughout most of the world, especially in mountainous and forested regions. Its plasticity and adaptability to different pedoclimates and the wide genetic variability of the species determined the spread of many different ecotypes and varieties in the wild. Throughout the centuries, man has used, selected and preserved these different genotypes, vegetatively propagating them by grafting, for many applications: fresh consumption, production of flour, animal nutrition, timber production, thereby actively contributing to the maintenance of the natural biodiversity of the species, and providing an excellent example of conservation horticulture. Nonetheless, currently the genetic variability of the species is critically endangered and hundreds of ecotypes and varieties are at risk of being lost due to a number of phytosanitary problems (canker blight, Chryphonectria parasitica; ink disease, Phytophthora spp.; gall wasp, Dryocosmus kuriphilus), and because of the many years of decline and abandonment of chestnut cultivation, which resulted in the loss of the binomial male chestnut. Recently, several research and experimentation programmes have attempted to develop strategies for the conservation of chestnut biodiversity. The purpose of this paper is to give an overview of the status of biodiversity conservation of the species and to present the results of a 7 year project aimed at the individuation and study of genetic diversity and conservation of Castanea spp. germplasm.",
"title": ""
},
{
"docid": "f104989b26d60908e76e34794cb420af",
"text": "Energy monitoring and conservation holds prime importance in today's world because of the imbalance between power generation and demand. The current scenario says that the power generated, which is primarily contributed by fossil fuels may get exhausted within the next 20 years. Currently, there are very accurate electronic energy monitoring systems available in the market. Most of these monitor the power consumed in a domestic household, in case of residential applications. Many a times, consumers are dissatisfied with the power bill as it does not show the power consumed at the device level. This paper presents the design and implementation of an energy meter using Arduino microcontroller which can be used to measure the power consumed by any individual electrical appliance. Internet of Things (IoT) is an emerging field and IoT based devices have created a revolution in electronics and IT. The main intention of the proposed energy meter is to monitor the power consumption at the device level, upload it to the server and establish remote control of any appliance. The energy monitoring system precisely calculates the power consumed by various electrical devices and displays it through a home energy monitoring website. The advantage of this device is that a user can understand the power consumed by any electrical appliance from the website and can take further steps to control them and thus help in energy conservation. Further on, the users can monitor the power consumption as well as the bill on daily basis.",
"title": ""
},
{
"docid": "da0c8fa769ac7e33cc81ab9ba72d457d",
"text": "Action quality assessment is crucial in areas of sports, surgery and assembly line where action skills can be evaluated. In this paper, we propose the Segment-based P3D-fused network S3D built-upon ED-TCN and push the performance on the UNLV-Dive dataset by a significant margin. We verify that segment-aware training performs better than full-video training which turns out to focus on the water spray. We show that temporal segmentation can be embedded with few efforts.",
"title": ""
},
{
"docid": "75d57c2f82fb7852feef4c7bcde41590",
"text": "This paper studies the causal impact of sibling gender composition on participation in Science, Technology, Engineering, and Mathematics (STEM) education. I focus on a sample of first-born children who all have a younger biological sibling, using rich administrative data on the total Danish population. The randomness of the secondborn siblings’ gender allows me to estimate the causal effect of having an opposite sex sibling relative to a same sex sibling. The results are robust to family size and show that having a second-born opposite sex sibling makes first-born men more and women less likely to enroll in a STEM program. Although sibling gender composition has no impact on men’s probability of actually completing a STEM degree, it has a powerful effect on women’s success within these fields: women with a younger brother are eleven percent less likely to complete any field-specific STEM education relative to women with a sister. I provide evidence that parents of mixed sex children gender-specialize their parenting more than parents of same sex children. These findings indicate that the family environment plays in important role for shaping interests in STEM fields. JEL classification: I2, J1, J3",
"title": ""
},
{
"docid": "5eb526843c41d2549862b60c17110b5b",
"text": "■ Abstract We explore the social dimension that enables adaptive ecosystem-based management. The review concentrates on experiences of adaptive governance of socialecological systems during periods of abrupt change (crisis) and investigates social sources of renewal and reorganization. Such governance connects individuals, organizations, agencies, and institutions at multiple organizational levels. Key persons provide leadership, trust, vision, meaning, and they help transform management organizations toward a learning environment. Adaptive governance systems often self-organize as social networks with teams and actor groups that draw on various knowledge systems and experiences for the development of a common understanding and policies. The emergence of “bridging organizations” seem to lower the costs of collaboration and conflict resolution, and enabling legislation and governmental policies can support self-organization while framing creativity for adaptive comanagement efforts. A resilient social-ecological system may make use of crisis as an opportunity to transform into a more desired state.",
"title": ""
},
{
"docid": "c2c832689f0bfa9dec0b32203ae355d4",
"text": "Steve Jobs, one of the greatest visionaries of our time was quoted in 1996 saying “a lot of times, people don’t know what they want until you show it to them”[38] indicating he advocated products to be developed based on human intuition rather than research. With the advancements of mobile devices, social networks and the Internet of Things (IoT) enormous amounts of complex data, both structured & unstructured are being captured in hope to allow organizations to make better business decisions as data is now vital for an organizations success. These enormous amounts of data are referred to as Big Data, which enables a competitive advantage over rivals when processed and analyzed appropriately. However Big Data Analytics has a few concerns including Management of Datalifecycle, Privacy & Security, and Data Representation. This paper reviews the fundamental concept of Big Data, the Data Storage domain, the MapReduce programming paradigm used in processing these large datasets, and focuses on two case studies showing the effectiveness of Big Data Analytics and presents how it could be of greater good in the future if handled appropriately. Keywords—Big Data; Big Data Analytics; Big Data Inconsistencies; Data Storage; MapReduce; Knowledge-Space",
"title": ""
},
{
"docid": "936c4fb60d37cce15ed22227d766908f",
"text": "English. The SENTIment POLarity Classification Task 2016 (SENTIPOLC), is a rerun of the shared task on sentiment classification at the message level on Italian tweets proposed for the first time in 2014 for the Evalita evaluation campaign. It includes three subtasks: subjectivity classification, polarity classification, and irony detection. In 2016 SENTIPOLC has been again the most participated EVALITA task with a total of 57 submitted runs from 13 different teams. We present the datasets – which includes an enriched annotation scheme for dealing with the impact on polarity of a figurative use of language – the evaluation methodology, and discuss results and participating systems. Italiano. Descriviamo modalità e risultati della seconda edizione della campagna di valutazione di sistemi di sentiment analysis (SENTIment POLarity Classification Task), proposta nel contesto di “EVALITA 2016: Evaluation of NLP and Speech Tools for Italian”. In SENTIPOLC è stata valutata la capacità dei sistemi di riconoscere diversi aspetti del sentiment espresso nei messaggi Twitter in lingua italiana, con un’articolazione in tre sottotask: subjectivity classification, polarity classification e irony detection. La campagna ha suscitato nuovamente grande interesse, con un totale di 57 run inviati da 13 gruppi di partecipanti.",
"title": ""
},
{
"docid": "79eab4c017b0f1fb382617f72bde19e7",
"text": "To perceive the external environment our brain uses multiple sources of sensory information derived from several different modalities, including vision, touch and audition. All these different sources of information have to be efficiently merged to form a coherent and robust percept. Here we highlight some of the mechanisms that underlie this merging of the senses in the brain. We show that, depending on the type of information, different combination and integration strategies are used and that prior knowledge is often required for interpreting the sensory signals.",
"title": ""
},
{
"docid": "3beb3f808af2a2c04b74416fe1acf630",
"text": "A national survey, based on a probability sample of patients admitted to short-term hospitals in the United States during 1973 to 1974 with a discharge diagnosis of an intracranial neoplasm, was conducted in 157 hospitals. The annual incidence was estimated at 17,000 for primary intracranial neoplasms and 17,400 for secondary intracranial neoplasms--8.2 and 8.3 per 100,000 US population, respectively. Rates of primary intracranial neoplasms increased steadily with advancing age. The age-adjusted rates were higher among men than among women (8.5 versus 7.9 per 100,000). However, although men were more susceptible to gliomas and neuronomas, incidence rates for meningiomas and pituitary adenomas were higher among women.",
"title": ""
},
{
"docid": "019d5deed0ed1e5b50097d5dc9121cb6",
"text": "Within interactive narrative research, agency is largely considered in terms of a player's autonomy in a game, defined as theoretical agency. Rather than in terms of whether or not the player feels they have agency, their perceived agency. An effective interactive narrative needs to provide a player a level of agency that satisfies their desires and must do that without compromising its own structure. Researchers frequently turn to techniques for increasing theoretical agency to accomplish this. This paper proposes an approach to categorize and explore techniques in which a player's level of perceived agency is affected without requiring more or less theoretical agency.",
"title": ""
}
] |
scidocsrr
|
72abe446f3141a86be6009ff5495dbd0
|
Improved Radiation Characteristics of Small Antipodal Vivaldi Antenna for Microwave and Millimeter-Wave Imaging Applications
|
[
{
"docid": "f9e3a402e1b36e27bada5499d958b2a8",
"text": "A miniaturized antipodal Vivaldi antenna to operate from 1 to 30 GHz is designed for nondestructive testing and evaluation of construction materials, such as concrete, polymers, and dielectric composites. A step-by-step procedure has been employed to design and optimize performance of the proposed antenna. First, a conventional antipodal Vivaldi antenna (CAVA) is designed as a reference. Second, the CAVA is shortened to have a small size of the CAVA. Third, to extend the low end of frequency band, the inner edges of the top and bottom radiators of the shortened CAVA have been bent. To enhance gain at lower frequencies, regular slit edge technique is employed. Finally, a half elliptical-shaped dielectric lens as an extension of the antenna substrate is added to the antenna to feature high gain and front-to-back ratio. A prototype of the antenna is employed as a part of the microwave imaging system to detect voids inside concrete specimen. High-range resolution images of voids are achieved by applying synthetic aperture radar algorithm.",
"title": ""
},
{
"docid": "d7065dccb396b0a47526fc14e0a9e796",
"text": "A modified compact antipodal Vivaldi antenna is proposed with good performance for different applications including microwave and millimeter wave imaging. A step-by-step procedure is applied in this design including conventional antipodal Vivaldi antenna (AVA), AVA with a periodic slit edge, and AVA with a trapezoid-shaped dielectric lens to feature performances including wide bandwidth, small size, high gain, front-to-back ratio and directivity, modification on E-plane beam tilt, and small sidelobe levels. By adding periodic slit edge at the outer brim of the antenna radiators, lower-end limitation of the conventional AVA extended twice without changing the overall dimensions of the antenna. The optimized antenna is fabricated and tested, and the results show that S11 <; -10 dB frequency band is from 3.4 to 40 GHz, and it is in good agreement with simulation one. Gain of the antenna has been elevated by the periodic slit edge and the trapezoid dielectric lens at lower frequencies up to 8 dB and at higher frequencies up to 15 dB, respectively. The E-plane beam tilts and sidelobe levels are reduced by the lens.",
"title": ""
}
] |
[
{
"docid": "516153ca56874e4836497be9b7631834",
"text": "Shunt active power filter (SAPF) is the preeminent solution against nonlinear loads, current harmonics, and power quality problems. APF topologies for harmonic compensation use numerous high-power rating components and are therefore disadvantageous. Hybrid topologies combining low-power rating APF with passive filters are used to reduce the power rating of voltage source inverter (VSI). Hybrid APF topologies for high-power rating systems use a transformer with large numbers of passive components. In this paper, a novel four-switch two-leg VSI topology for a three-phase SAPF is proposed for reducing the system cost and size. The proposed topology comprises a two-arm bridge structure, four switches, coupling inductors, and sets of LC PFs. The third leg of the three-phase VSI is removed by eliminating the set of power switching devices, thereby directly connecting the phase with the negative terminals of the dc-link capacitor. The proposed topology enhances the harmonic compensation capability and provides complete reactive power compensation compared with conventional APF topologies. The new experimental prototype is tested in the laboratory to verify the results in terms of total harmonic distortion, balanced supply current, and harmonic compensation, following the IEEE-519 standard.",
"title": ""
},
{
"docid": "d2f5f5b42d732a5d27310e4f2d76116a",
"text": "This paper reports on a cluster analysis of pervasive games through a bottom-up approach based upon 120 game examples. The basis for the clustering algorithm relies on the identification of pervasive gameplay design patterns for each game from a set of 75 possible patterns. The resulting hierarchy presents a view of the design space of pervasive games, and details of clusters and novel gameplay features are described. The paper concludes with a view over how the clusters relate to existing genres and models of pervasive games.",
"title": ""
},
{
"docid": "93625a1cc77929e98a3bdbf30ac16f3a",
"text": "The performance of rasterization-based rendering on current GPUs strongly depends on the abilities to avoid overdraw and to prevent rendering triangles smaller than the pixel size. Otherwise, the rates at which highresolution polygon models can be displayed are affected significantly. Instead of trying to build these abilities into the rasterization-based rendering pipeline, we propose an alternative rendering pipeline implementation that uses rasterization and ray-casting in every frame simultaneously to determine eye-ray intersections. To make ray-casting competitive with rasterization, we introduce a memory-efficient sample-based data structure which gives rise to an efficient ray traversal procedure. In combination with a regular model subdivision, the most optimal rendering technique can be selected at run-time for each part. For very large triangle meshes our method can outperform pure rasterization and requires a considerably smaller memory budget on the GPU. Since the proposed data structure can be constructed from any renderable surface representation, it can also be used to efficiently render isosurfaces in scalar volume fields. The compactness of the data structure allows rendering from GPU memory when alternative techniques already require exhaustive paging.",
"title": ""
},
{
"docid": "383e569dcd1f0c648ad2274588f76961",
"text": "BACKGROUND\nOutcomes are poor for patients with previously treated, advanced or metastatic non-small-cell lung cancer (NSCLC). The anti-programmed death ligand 1 (PD-L1) antibody atezolizumab is clinically active against cancer, including NSCLC, especially cancers expressing PD-L1 on tumour cells, tumour-infiltrating immune cells, or both. We assessed efficacy and safety of atezolizumab versus docetaxel in previously treated NSCLC, analysed by PD-L1 expression levels on tumour cells and tumour-infiltrating immune cells and in the intention-to-treat population.\n\n\nMETHODS\nIn this open-label, phase 2 randomised controlled trial, patients with NSCLC who progressed on post-platinum chemotherapy were recruited in 61 academic medical centres and community oncology practices across 13 countries in Europe and North America. Key inclusion criteria were Eastern Cooperative Oncology Group performance status 0 or 1, measurable disease by Response Evaluation Criteria In Solid Tumors version 1.1 (RECIST v1.1), and adequate haematological and end-organ function. Patients were stratified by PD-L1 tumour-infiltrating immune cell status, histology, and previous lines of therapy, and randomly assigned (1:1) by permuted block randomisation (with a block size of four) using an interactive voice or web system to receive intravenous atezolizumab 1200 mg or docetaxel 75 mg/m(2) once every 3 weeks. Baseline PD-L1 expression was scored by immunohistochemistry in tumour cells (as percentage of PD-L1-expressing tumour cells TC3≥50%, TC2≥5% and <50%, TC1≥1% and <5%, and TC0<1%) and tumour-infiltrating immune cells (as percentage of tumour area: IC3≥10%, IC2≥5% and <10%, IC1≥1% and <5%, and IC0<1%). The primary endpoint was overall survival in the intention-to-treat population and PD-L1 subgroups at 173 deaths. Biomarkers were assessed in an exploratory analysis. We assessed safety in all patients who received at least one dose of study drug. This study is registered with ClinicalTrials.gov, number NCT01903993.\n\n\nFINDINGS\nPatients were enrolled between Aug 5, 2013, and March 31, 2014. 144 patients were randomly allocated to the atezolizumab group, and 143 to the docetaxel group. 142 patients received at least one dose of atezolizumab and 135 received docetaxel. Overall survival in the intention-to-treat population was 12·6 months (95% CI 9·7-16·4) for atezolizumab versus 9·7 months (8·6-12·0) for docetaxel (hazard ratio [HR] 0·73 [95% CI 0·53-0·99]; p=0·04). Increasing improvement in overall survival was associated with increasing PD-L1 expression (TC3 or IC3 HR 0·49 [0·22-1·07; p=0·068], TC2/3 or IC2/3 HR 0·54 [0·33-0·89; p=0·014], TC1/2/3 or IC1/2/3 HR 0·59 [0·40-0·85; p=0·005], TC0 and IC0 HR 1·04 [0·62-1·75; p=0·871]). In our exploratory analysis, patients with pre-existing immunity, defined by high T-effector-interferon-γ-associated gene expression, had improved overall survival with atezolizumab. 11 (8%) patients in the atezolizumab group discontinued because of adverse events versus 30 (22%) patients in the docetaxel group. 16 (11%) patients in the atezolizumab group versus 52 (39%) patients in the docetaxel group had treatment-related grade 3-4 adverse events, and one (<1%) patient in the atezolizumab group versus three (2%) patients in the docetaxel group died from a treatment-related adverse event.\n\n\nINTERPRETATION\nAtezolizumab significantly improved survival compared with docetaxel in patients with previously treated NSCLC. 
Improvement correlated with PD-L1 immunohistochemistry expression on tumour cells and tumour-infiltrating immune cells, suggesting that PD-L1 expression is predictive for atezolizumab benefit. Atezolizumab was well tolerated, with a safety profile distinct from chemotherapy.\n\n\nFUNDING\nF Hoffmann-La Roche/Genentech Inc.",
"title": ""
},
{
"docid": "1156e19011c722404e077ae64f6e526f",
"text": "Malwares are malignant softwares. It is designed t o amage computer systems without the knowledge of the owner using the system. Softwares from reputabl e vendors also contain malicious code that affects the system or leaks informations to remote servers. Malwares incl udes computer viruses, Worms, spyware, dishonest ad -ware, rootkits, Trojans, dialers etc. Malware is one of t he most serious security threats on the Internet to day. In fact, most Internet problems such as spam e-mails and denial o f service attacks have malwareas their underlying c ause. Computers that are compromised with malware are oft en networked together to form botnets and many atta cks re launched using these malicious, attacker controlled n tworks. The paper focuses on various Malware det ction and removal methods. KeywordsMalware, Intruders, Checksum, Digital Immune System , Behavior blocker",
"title": ""
},
{
"docid": "cdb937def5a92e3843a761f57278783e",
"text": "We design a novel, communication-efficient, failure-robust protocol for secure aggregation of high-dimensional data. Our protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner (i.e. without learning each user's individual contribution), and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network. We prove the security of our protocol in the honest-but-curious and active adversary settings, and show that security is maintained even if an arbitrarily chosen subset of users drop out at any time. We evaluate the efficiency of our protocol and show, by complexity analysis and a concrete implementation, that its runtime and communication overhead remain low even on large data sets and client pools. For 16-bit input values, our protocol offers $1.73 x communication expansion for 210 users and 220-dimensional vectors, and 1.98 x expansion for 214 users and 224-dimensional vectors over sending data in the clear.",
"title": ""
},
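The record above summarizes a secure-aggregation protocol for federated learning. As a rough illustration of the core pairwise-masking idea only (a minimal sketch under stated assumptions, not the paper's protocol: real systems derive the pairwise seeds via key agreement and add secret sharing to survive dropouts), the following Python snippet shows how per-pair masks cancel in the server-side sum; all names and parameters here are illustrative.

```python
# Minimal sketch of the pairwise-masking idea behind secure aggregation.
# Illustration only: real protocols derive pairwise seeds via key agreement
# and add secret sharing to tolerate dropouts.
import numpy as np

DIM = 8          # vector dimension (toy value)
MOD = 2**16      # arithmetic is done modulo 2^16, matching 16-bit inputs

def mask_for(seed, dim):
    # Deterministic pseudo-random mask derived from a shared pairwise seed.
    return np.random.default_rng(seed).integers(0, MOD, size=dim)

def masked_update(user_id, x, pairwise_seeds):
    # Add the mask for every peer with a larger id, subtract for smaller ids;
    # summed over all users, these masks cancel exactly.
    y = x.copy()
    for peer, seed in pairwise_seeds[user_id].items():
        sign = 1 if user_id < peer else -1
        y = (y + sign * mask_for(seed, DIM)) % MOD
    return y

# Toy setup: 3 users, each pair shares one seed known only to the two of them.
users = [0, 1, 2]
seeds = {u: {} for u in users}
for i in users:
    for j in users:
        if i < j:
            s = hash((i, j)) % (2**32)
            seeds[i][j] = s
            seeds[j][i] = s

inputs = {u: np.random.default_rng(u).integers(0, 100, size=DIM) for u in users}
server_sum = sum(masked_update(u, inputs[u], seeds) for u in users) % MOD
assert np.array_equal(server_sum, sum(inputs.values()) % MOD)
```

Because each mask is added by exactly one user and subtracted by its peer, the server only ever learns the modular sum of the inputs, not any individual vector.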
{
"docid": "1659af1bc0d627609376a65b42fcbd8e",
"text": "The Black-Litterman (BL) model is a widely used asset allocation model in the financial industry. In this paper, we provide a new perspective. The key insight is to replace the statistical framework in the original approach with ideas from inverse optimization. This insight allows us to significantly expand the scope and applicability of the BL model. We provide a richer formulation that, unlike the original model, is flexible enough to incorporate investor information on volatility and market dynamics. Equally importantly, our approach allows us to move beyond the traditional mean-variance paradigm of the original model and construct \"BL\"-type estimators for more general notions of risk such as coherent risk measures. Computationally, we introduce and study two new \"BL\"-type estimators and their corresponding portfolios: a Mean Variance Inverse Optimization (MV-IO) portfolio and a Robust Mean Variance Inverse Optimization (RMV-IO) portfolio. These two approaches are motivated by ideas from arbitrage pricing theory and volatility uncertainty. Using numerical simulation and historical backtesting, we show that both methods often demonstrate a better risk-reward tradeoff than their BL counterparts and are more robust to incorrect investor views.",
"title": ""
},
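The Black-Litterman abstract above assumes familiarity with the classical BL posterior-mean estimator that its inverse-optimization variants generalize. The sketch below implements that classical formula in numpy using the standard notation (equilibrium returns pi, view matrix P, view values q, view covariance Omega, scaling tau); it is background illustration only, not the MV-IO or RMV-IO estimators proposed in the paper.

```python
# Sketch of the classical Black-Litterman posterior mean; notation
# (tau, P, Omega, q) is the standard one, not specific to the paper above.
import numpy as np

def black_litterman_mean(pi, Sigma, P, q, Omega, tau=0.05):
    """Blend equilibrium returns pi with investor views expressed as P @ mu ~ q."""
    tS_inv = np.linalg.inv(tau * Sigma)
    Om_inv = np.linalg.inv(Omega)
    A = tS_inv + P.T @ Om_inv @ P
    b = tS_inv @ pi + P.T @ Om_inv @ q
    return np.linalg.solve(A, b)

# Toy example: 3 assets, one relative view "asset 0 outperforms asset 2 by 2%".
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
pi = np.array([0.05, 0.07, 0.09])          # equilibrium (market-implied) returns
P = np.array([[1.0, 0.0, -1.0]])           # view-picking matrix
q = np.array([0.02])                       # view value
Omega = np.array([[0.001]])                # view uncertainty
print(black_litterman_mean(pi, Sigma, P, q, Omega))
```

The design choice illustrated here is the precision-weighted blend: the smaller the view uncertainty Omega, the more the posterior mean is pulled from the equilibrium returns toward the stated views.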
{
"docid": "9c0057869c5f5d2230991e59471eb0b8",
"text": "Recently, the complexity of modern, real-time computer games has increased drastically. The need for sophisticated game AI, in particular for Non-Player Characters, grows with the demand for realistic games. Writing consistent, re-useable and efficient AI code has become hard. We demonstrate how modeling game AI at an appropriate abstraction level using an appropriate modeling language has many advantages. A variant of Rhapsody Statecharts is proposed as an appropriate formalism. The Tank Wars game by Electronic Arts (EA) is used to demonstrate our concrete approach. We show how the use of the Statecharts formalism leads quite naturally to layered modeling of game AI and allows modelers to abstract away from choices between, for example, time-slicing and discrete-event time management. Finally, our custom tools are used to synthesize efficient C++ code to insert into the Tank Wars main game loop.",
"title": ""
},
{
"docid": "d7ddc51f23fb2e18dfb3582848b087ad",
"text": "Sorting is integral part of many computer based systems and applications, as it involves rearranging information into either ascending or descending order. There are many sorting algorithms like Quick sort, Heap sort, Merge sort, Insertion sort, Selection sort, Bubble sort and Freezing sort. However, efforts have been made to improve the performance of the algorithm in terms of efficiency, indeed a big issue to be considered. Major Emphasis has been placed on complexity by reducing the Number of comparisons, hence reducing complexity. This paper presents new sorting algorithm EIS, "ENHANCED INSERTION SORT". It is basically an enhancement toINSERTION SORT (a kind of Hybrid sorting technique) by making it impressively faster algorithm with O(n)complexity as compared to O(n2) of insertion sort in worst case and less than O(n1. 585) in average case which is much better than insertion sort O(n2). It works flawlessly with huge lists of elements. To prove the effectiveness of the algorithm, the new algorithm is analyzed, implemented, tested and results has been carried out and compared with other major sorting algorithms and the results were promising.",
"title": ""
},
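The EIS abstract above does not include pseudocode, so the following sketch only illustrates the general flavor of enhancing insertion sort, here by locating the insertion point with binary search to cut comparisons; it is a hypothetical stand-in, not the paper's EIS algorithm, and it does not reproduce the claimed complexity bounds.

```python
# Generic illustration of one common insertion-sort enhancement: binary search
# for the insert position reduces comparisons (data moves are unchanged).
# This is NOT the paper's EIS.
from bisect import bisect_right

def binary_insertion_sort(a):
    """In-place insertion sort with a binary search for the insert position."""
    for i in range(1, len(a)):
        key = a[i]
        pos = bisect_right(a, key, 0, i)   # comparisons: O(log i) per element
        a[pos + 1:i + 1] = a[pos:i]        # shift the tail right by one slot
        a[pos] = key
    return a

print(binary_insertion_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```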
{
"docid": "71171a26d7dc0b1e6c6836b96702a358",
"text": "UNLABELLED\nAssisted reproductive technologies (ART) are one of the most dynamic fields of the medical science. ART beginning on 25th of July 1978 with the pioneering scientific work of P. Steptoe and B. Edwards. Although the ART have improved dramatically since 1978, the success of that treatment remains unsatisfactory. More than 25 years experience in IVF accrued in believe that a break-through in success rates can soon be reach. The guarantee of IVF success remains the good ovarian response, which very often is a challenge for reproductive specialist. The manuscript is a review of the literature about the most successful therapy for controlled ovarian stimulation for IVF in aged infertile women and/or low ovarian responders.\n\n\nCONCLUSIONS\nThe ovarian stimulation with highly purified gonadotrophins (HP-FSH) in outlined group of patient is associated with fewer oocytes retrieval, but higher proportion of top-quality embryos compared with rFSH, an improved capacity to implant, ongoing pregnancy and live birth rate among the top-quality embryos, derived from stimulation with HP-hMG compared with top-quality embryos in the rFSH group. High purified hMG better performed than rFSH in older women and women with poor ovarian response, probably of exogenous LH activity and/or relatively higher acidic isoforms of the FSH protein (produced when lower estrogen levels are present) which may be of relevance for clinical outcome.",
"title": ""
},
{
"docid": "8f0e9f9a3e23e701eae4f3444d933301",
"text": "Reliability is a major concern for memories. To ensure that errors do not affect the data stored in a memory, error correction codes (ECCs) are widely used in memories. ECCs introduce an overhead as some bits are added to each word to detect and correct errors. This increases the cost of the memory. Content addressable memories (CAMs) are a special type of memories in which the input is compared with the data stored, and if a match is found, the output is the address of that word. CAMs are used in many computing and networking applications. In this brief, the specific features of CAMs are used to reduce the cost of implementing ECCs. More precisely, the proposed technique eliminates the need to store the ECC bits for each word in the memory. This is done by embedding those bits into the address of the key. The main potential issue of the new scheme is that it restricts the addresses in which a new key can be stored. Therefore, it can occur that a new key cannot be added into the CAM when there are addresses that are not used. This issue is analyzed and evaluated showing that, for large CAMs, it would only occur when the CAM occupancy is close to 100%. Therefore, the proposed scheme can be used to effectively reduce the cost of implementing ECCs in CAMs.",
"title": ""
},
{
"docid": "daa4ff5d620fc9319cf07d55bd99df3d",
"text": "BACKGROUND\nThe prevalence of depression in older people is high, treatment is inadequate, it creates a substantial burden and is a public health priority for which exercise has been proposed as a therapeutic strategy.\n\n\nAIMS\nTo estimate the effect of exercise on depressive symptoms among older people, and assess whether treatment effect varies depending on the depression criteria used to determine participant eligibility.\n\n\nMETHOD\nSystematic review and meta-analysis of randomised controlled trials of exercise for depression in older people.\n\n\nRESULTS\nNine trials met the inclusion criteria and seven were meta-analysed. Exercise was associated with significantly lower depression severity (standardised mean difference (SMD) = -0.34, 95% CI -0.52 to -0.17), irrespective of whether participant eligibility was determined by clinical diagnosis (SMD = -0.38, 95% CI -0.67 to -0.10) or symptom checklist (SMD = -0.34, 95% CI -0.62 to -0.06). Results remained significant in sensitivity analyses.\n\n\nCONCLUSIONS\nOur findings suggest that, for older people who present with clinically meaningful symptoms of depression, prescribing structured exercise tailored to individual ability will reduce depression severity.",
"title": ""
},
{
"docid": "ffbab4b090448de06ff5237d43c5e293",
"text": "Motivated by a project to create a system for people who are deaf or hard-of-hearing that would use automatic speech recognition (ASR) to produce real-time text captions of spoken English during in-person meetings with hearing individuals, we have augmented a transcript of the Switchboard conversational dialogue corpus with an overlay of word-importance annotations, with a numeric score for each word, to indicate its importance to the meaning of each dialogue turn. Further, we demonstrate the utility of this corpus by training an automatic word importance labeling model; our best performing model has an F-score of 0.60 in an ordinal 6-class word-importance classification task with an agreement (concordance correlation coefficient) of 0.839 with the human annotators (agreement score between annotators is 0.89). Finally, we discuss our intended future applications of this resource, particularly for the task of evaluating ASR performance, i.e. creating metrics that predict ASR-output caption text usability for DHH users better than Word Error Rate (WER).",
"title": ""
},
{
"docid": "2970f641a9a9b71421783c929d4c8430",
"text": "An electron linear accelerator system with several novel features has been developed for radiation therapy. The beam from a 25 cell S-band standing wave structure, operated in the ¿/2 mode with on-axis couplers, is reflected in an achromatic isochronous magnet and reinjected into the accelerator. The second pass doubles the energy while conserving rf power and minimizing the overall length of the unit. The beam is then transported through an annular electron gun and bent into the collimator by an innovative two-element doubly achromatic doubly focusing 270° magnet which allows a significant reduction in unit height. The energy is reduced by adjusting the position of the reflecting magnet with respect to the accelerator. The system generates 5 Gy m2min-1 beams of 25 MV photons and 5 to 25 MeV electrons. Extensive use of tungsten shielding minimizes neutron leakage. The photon mode surface dose is reduced by a carefully optimized electron filter. An improved scanning system gives exceptionally low electron -mode photon contamination.",
"title": ""
},
{
"docid": "08260ba76f242725b8a08cbd8e4ec507",
"text": "Vocal singing (singing with lyrics) shares features common to music and language but it is not clear to what extent they use the same brain systems, particularly at the higher cortical level, and how this varies with expertise. Twenty-six participants of varying singing ability performed two functional imaging tasks. The first examined covert generative language using orthographic lexical retrieval while the second required covert vocal singing of a well-known song. The neural networks subserving covert vocal singing and language were found to be proximally located, and their extent of cortical overlap varied with singing expertise. Nonexpert singers showed greater engagement of their language network during vocal singing, likely accounting for their less tuneful performance. In contrast, expert singers showed a more unilateral pattern of activation associated with reduced engagement of the right frontal lobe. The findings indicate that singing expertise promotes independence from the language network with decoupling producing more tuneful performance. This means that the age-old singing practice of 'finding your singing voice' may be neurologically mediated by changing how strongly singing is coupled to the language system.",
"title": ""
},
{
"docid": "7dc91007172f8f36ae3cf0281746a4f2",
"text": "The tourist trip design problem (TTDP) refers to a route-planning problem for tourists interested in visiting multiple points of interest (POIs). TTDP solvers derive daily tourist tours, i.e., ordered visits to POIs, which respect tourist constraints and POIs attributes. The main objective of the problem discussed is to select POIs that match tourist preferences, thereby maximizing tourist satisfaction, while taking into account a multitude of parameters and constraints (e.g., distances among POIs, visiting time required for each POI, POIs visiting days/hours, entrance fees, weather conditions) and respecting the time available for sightseeing on a daily basis. The aim of this work is to survey models, algorithmic approaches and methodologies concerning tourist trip design problems. Recent approaches are examined, focusing on problem models that best capture a multitude of realistic POIs attributes and user constraints; further, several interesting TTDP variants are investigated. Open issues and promising prospects in tourist trip planning research are also discussed.",
"title": ""
},
{
"docid": "bdc6ff2ed295039bb9d86944c49fff13",
"text": "The problem of maximizing influence spread has been widely studied in social networks, because of its tremendous number of applications in determining critical points in a social network for information dissemination. All the techniques proposed in the literature are inherently static in nature, which are designed for social networks with a fixed set of links. However, many forms of social interactions are transient in nature, with relatively short periods of interaction. Any influence spread may happen only during the period of interaction, and the probability of spread is a function of the corresponding interaction time. Furthermore, such interactions are quite fluid and evolving, as a result of which the topology of the underlying network may change rapidly, as new interactions form and others terminate. In such cases, it may be desirable to determine the influential nodes based on the dynamic interaction patterns. Alternatively, one may wish to discover the most likely starting points for a given infection pattern. We will propose methods which can be used both for optimization of information spread, as well as the backward tracing of the source of influence spread. We will present experimental results illustrating the effectiveness of our approach on a number of real data sets.",
"title": ""
},
{
"docid": "0eff90e073f09e5bc0f298fba512abd4",
"text": "The issue of handwritten character recognition is still a big challenge to the scientific community. Several approaches to address this challenge have been attempted in the last years, mostly focusing on the English pre-printed or handwritten characters space. Thus, the need to attempt a research related to Arabic handwritten text recognition. Algorithms based on neural networks have proved to give better results than conventional methods when applied to problems where the decision rules of the classification problem are not clearly defined. Two neural networks were built to classify already segmented characters of handwritten Arabic text. The two neural networks correctly recognized 73% of the characters. However, one hurdle was encountered in the above scenario, which can be summarized as follows: there are a lot of handwritten characters that can be segmented and classified into two or more different classes depending on whether they are looked at separately, or in a word, or even in a sentence. In other words, character classification, especially handwritten Arabic characters, depends largely on contextual information, not only on topographic features extracted from these characters.",
"title": ""
},
{
"docid": "b342443400c85277d4f980a39198ded0",
"text": "We present several optimizations to SPHINCS, a stateless hash-based signature scheme proposed by Bernstein et al. in 2015: PORS, a more secure variant of the HORS few-time signature scheme used in SPHINCS; secret key caching, to speed-up signing and reduce signature size; batch signing, to amortize signature time and reduce signature size when signing multiple messages at once; mask-less constructions to reduce the key size and simplify the scheme; and Octopus, a technique to eliminate redundancies from authentication paths in Merkle trees. Based on a refined analysis of the subset resilience problem, we show that SPHINCS’ parameters can be modified to reduce the signature size while retaining a similar security level and computation time. We then propose Gravity-SPHINCS, our variant of SPHINCS embodying the aforementioned tricks. Gravity-SPHINCS has shorter keys (32 and 64 bytes instead of ≈ 1 KB), shorter signatures (≈ 30 KB instead of 41 KB), and faster signing and verification for the same security level as SPHINCS.",
"title": ""
}
] |
scidocsrr
|
950fed6fd9bcb1fe8cfc00a83eda7668
|
MQTT based vehicle accident detection and alert system
|
[
{
"docid": "8cdc70a728191aa25789c6284d581dc0",
"text": "The objective of the smart helmet is to provide a means and apparatus for detecting and reporting accidents. Sensors, Wi-Fi enabled processor, and cloud computing infrastructures are utilised for building the system. The accident detection system communicates the accelerometer values to the processor which continuously monitors for erratic variations. When an accident occurs, the related details are sent to the emergency contacts by utilizing a cloud based service. The vehicle location is obtained by making use of the global positioning system. The system promises a reliable and quick delivery of information relating to the accident in real time and is appropriately named Konnect. Thus, by making use of the ubiquitous connectivity which is a salient feature for the smart cities, a smart helmet for accident detection is built.",
"title": ""
},
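For the MQTT-based accident detection and alert system asked about in the query, the helmet passage above describes the overall pattern: monitor accelerometer readings for erratic spikes and push an alert with the GPS location to a cloud service. A minimal Python sketch of that loop using the paho-mqtt client is shown below; the broker host, topic name, impact threshold, and the read_accelerometer()/read_gps() helpers are assumptions for illustration, not part of the described system.

```python
# Sketch of an MQTT-based accident alert loop in the spirit of the system
# described above. Broker address, topic name, g-force threshold and the
# read_accelerometer()/read_gps() helpers are illustrative assumptions.
import json
import math
import time
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"      # assumed broker host
TOPIC = "vehicle/accident/alert"   # assumed topic
CRASH_G_THRESHOLD = 4.0            # assumed impact threshold in g

def read_accelerometer():
    # Placeholder for a real sensor driver; returns (ax, ay, az) in g.
    return (0.0, 0.0, 1.0)

def read_gps():
    # Placeholder for a real GPS module; returns (lat, lon).
    return (12.9716, 77.5946)

client = mqtt.Client()             # paho-mqtt 1.x constructor; 2.x also takes a callback API version
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()

while True:
    ax, ay, az = read_accelerometer()
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    if magnitude > CRASH_G_THRESHOLD:          # erratic spike -> likely impact
        lat, lon = read_gps()
        payload = json.dumps({"event": "accident", "g": magnitude,
                              "lat": lat, "lon": lon, "ts": time.time()})
        client.publish(TOPIC, payload, qos=1)  # alert subscribers (cloud/contacts)
    time.sleep(0.1)
```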
{
"docid": "39b072a5adb75eb43561017d53ab6f44",
"text": "The Internet of Things (IoT) is converting the agriculture industry and solving the immense problems or the major challenges faced by the farmers todays in the field. India is one of the 13th countries in the world having scarcity of water resources. Due to ever increasing of world population, we are facing difficulties in the shortage of water resources, limited availability of land, difficult to manage the costs while meeting the demands of increasing consumption needs of a global population that is expected to grow by 70% by the year 2050. The influence of population growth on agriculture leads to a miserable impact on the farmers livelihood. To overcome the problems we design a low cost system for monitoring the agriculture farm which continuously measure the level of soil moisture of the plants and alert the farmers if the moisture content of particular plants is low via sms or an email. This system uses an esp8266 microcontroller and a moisture sensor using Losant platform. Losant is a simple and most powerful IoT cloud platform for the development of coming generation. It offers the real time data visualization of sensors data which can be operate from any part of the world irrespective of the position of field.",
"title": ""
}
] |
[
{
"docid": "4f527bddf622c901a7894ce7cc381ee1",
"text": "Most popular programming languages support situations where a value of one type is converted into a value of another type without any explicit cast. Such implicit type conversions, or type coercions, are a highly controversial language feature. Proponents argue that type coercions enable writing concise code. Opponents argue that type coercions are error-prone and that they reduce the understandability of programs. This paper studies the use of type coercions in JavaScript, a language notorious for its widespread use of coercions. We dynamically analyze hundreds of programs, including real-world web applications and popular benchmark programs. We find that coercions are widely used (in 80.42% of all function executions) and that most coercions are likely to be harmless (98.85%). Furthermore, we identify a set of rarely occurring and potentially harmful coercions that safer subsets of JavaScript or future language designs may want to disallow. Our results suggest that type coercions are significantly less evil than commonly assumed and that analyses targeted at real-world JavaScript programs must consider coercions. 1998 ACM Subject Classification D.3.3 Language Constructs and Features, F.3.2 Semantics of Programming Languages, D.2.8 Metrics",
"title": ""
},
{
"docid": "bc8592866537b13cac47abe621a90d03",
"text": "In the previous paper Ralph Brodd and Martin Winter described the different kinds of batteries and fuel cells. In this paper I will describe lithium batteries in more detail, building an overall foundation for the papers that follow which describe specific components in some depth and usually with an emphasis on the materials behavior. The lithium battery industry is undergoing rapid expansion, now representing the largest segment of the portable battery industry and dominating the computer, cell phone, and camera power source industry. However, the present secondary batteries use expensive components, which are not in sufficient supply to allow the industry to grow at the same rate in the next decade. Moreover, the safety of the system is questionable for the large-scale batteries needed for hybrid electric vehicles (HEV). Another battery need is for a high-power system that can be used for power tools, where only the environmentally hazardous Ni/ Cd battery presently meets the requirements. A battery is a transducer that converts chemical energy into electrical energy and vice versa. It contains an anode, a cathode, and an electrolyte. The anode, in the case of a lithium battery, is the source of lithium ions. The cathode is the sink for the lithium ions and is chosen to optimize a number of parameters, discussed below. The electrolyte provides for the separation of ionic transport and electronic transport, and in a perfect battery the lithium ion transport number will be unity in the electrolyte. The cell potential is determined by the difference between the chemical potential of the lithium in the anode and cathode, ∆G ) -EF. As noted above, the lithium ions flow through the electrolyte whereas the electrons generated from the reaction, Li ) Li+ + e-, go through the external circuit to do work. Thus, the electrode system must allow for the flow of both lithium ions and electrons. That is, it must be both a good ionic conductor and an electronic conductor. As discussed below, many electrochemically active materials are not good electronic conductors, so it is necessary to add an electronically conductive material such as carbon * To whom correspondence should be addressed. Phone and fax: (607) 777-4623. E-mail: [email protected]. 4271 Chem. Rev. 2004, 104, 4271−4301",
"title": ""
},
{
"docid": "59e02bc986876edc0ee0a97fd4d12a28",
"text": "CONTEXT\nSocial anxiety disorder is thought to involve emotional hyperreactivity, cognitive distortions, and ineffective emotion regulation. While the neural bases of emotional reactivity to social stimuli have been described, the neural bases of emotional reactivity and cognitive regulation during social and physical threat, and their relationship to social anxiety symptom severity, have yet to be investigated.\n\n\nOBJECTIVE\nTo investigate behavioral and neural correlates of emotional reactivity and cognitive regulation in patients and controls during processing of social and physical threat stimuli.\n\n\nDESIGN\nParticipants were trained to implement cognitive-linguistic regulation of emotional reactivity induced by social (harsh facial expressions) and physical (violent scenes) threat while undergoing functional magnetic resonance imaging and providing behavioral ratings of negative emotion experience.\n\n\nSETTING\nAcademic psychology department.\n\n\nPARTICIPANTS\nFifteen adults with social anxiety disorder and 17 demographically matched healthy controls.\n\n\nMAIN OUTCOME MEASURES\nBlood oxygen level-dependent signal and negative emotion ratings.\n\n\nRESULTS\nBehaviorally, patients reported greater negative emotion than controls during social and physical threat but showed equivalent reduction in negative emotion following cognitive regulation. Neurally, viewing social threat resulted in greater emotion-related neural responses in patients than controls, with social anxiety symptom severity related to activity in a network of emotion- and attention-processing regions in patients only. Viewing physical threat produced no between-group differences. Regulation during social threat resulted in greater cognitive and attention regulation-related brain activation in controls compared with patients. Regulation during physical threat produced greater cognitive control-related response (ie, right dorsolateral prefrontal cortex) in patients compared with controls.\n\n\nCONCLUSIONS\nCompared with controls, patients demonstrated exaggerated negative emotion reactivity and reduced cognitive regulation-related neural activation, specifically for social threat stimuli. These findings help to elucidate potential neural mechanisms of emotion regulation that might serve as biomarkers for interventions for social anxiety disorder.",
"title": ""
},
{
"docid": "af9e3268901a46967da226537eba3cb6",
"text": "Magnetic Resonance Imaging (MRI) is a non-invasive diagnostic tool very frequently used for brain 8 imaging. The classification of MRI images of normal and pathological brain conditions pose a challenge from 9 technological and clinical point of view, since MR imaging focuses on soft tissue anatomy and generates a large 10 information set and these can act as a mirror reflecting the conditions of the brain. A new approach by 11 integrating wavelet entropy based spider web plots and probabilistic neural network is proposed for the 12 classification of MRI brain images. The two step method for classification uses (1) wavelet entropy based spider 13 web plots for the feature extraction and (2) probabilistic neural network for the classification. The spider web 14 plot is a geometric construction drawn using the entropy of the wavelet approximation components and the areas 15 calculated are used as feature set for classification. Probabilistic neural network provides a general solution to 16 the pattern classification problems and the classification accuracy is found to be 100%. 17 Keywords-Magnetic Resonance Imaging (MRI), Wavelet Transformation, Entropy, Spider Web Plots, 18 Probabilistic Neural Network 19",
"title": ""
},
{
"docid": "cebcd53ef867abb158445842cd0f4daf",
"text": "Let [ be a random variable over a finite set with an arbitrary probability distribution. In this paper we make improvements to a fast method of generating sample values for ( in constant time.",
"title": ""
},
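The passage above concerns constant-time sampling from an arbitrary finite distribution; the standard technique for this is the alias method. The sketch below implements Vose's variant of the alias table in Python as background illustration; it is not necessarily the specific improvement proposed in that paper.

```python
# Sketch of the alias method (Vose's variant) for O(1) sampling from an
# arbitrary finite distribution; not necessarily the paper's exact variant.
import random

def build_alias_table(probs):
    n = len(probs)
    scaled = [p * n for p in probs]
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    prob, alias = [0.0] * n, [0] * n
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:       # leftovers are exactly 1 up to rounding
        prob[i] = 1.0
    return prob, alias

def sample(prob, alias):
    i = random.randrange(len(prob))          # pick a column uniformly
    return i if random.random() < prob[i] else alias[i]

prob, alias = build_alias_table([0.5, 0.3, 0.2])
counts = [0, 0, 0]
for _ in range(100_000):
    counts[sample(prob, alias)] += 1
print(counts)   # roughly proportional to [0.5, 0.3, 0.2]
```

After the one-time O(n) table build, each draw costs one uniform index and one biased coin flip, which is what makes the per-sample cost constant.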
{
"docid": "9df51d2e5755caa355869dacb90544c2",
"text": "Deep learning frameworks have been widely deployed on GPU servers for deep learning applications in both academia and industry. In training deep neural networks (DNNs), there are many standard processes or algorithms, such as convolution and stochastic gradient descent (SGD), but the running performance of different frameworks might be different even running the same deep model on the same GPU hardware. In this study, we evaluate the running performance of four state-of-the-art distributed deep learning frameworks (i.e., Caffe-MPI, CNTK, MXNet, and TensorFlow) over single-GPU, multi-GPU, and multi-node environments. We first build performance models of standard processes in training DNNs with SGD, and then we benchmark the running performance of these frameworks with three popular convolutional neural networks (i.e., AlexNet, GoogleNet and ResNet-50), after that, we analyze what factors that result in the performance gap among these four frameworks. Through both analytical and experimental analysis, we identify bottlenecks and overheads which could be further optimized. The main contribution is that the proposed performance models and the analysis provide further optimization directions in both algorithmic design and system configuration.",
"title": ""
},
{
"docid": "912fb50be7a37154259ad3d7f5c4194f",
"text": "This paper presents a novel single-ended disturb-free 9T subthreshold SRAM cell with cross-point data-aware Write word-line structure. The disturb-free feature facilitates bit-interleaving architecture, which can reduce multiple-bit upsets in a single word and enhance soft error immunity by employing Error Checking and Correction (ECC) technique. The proposed 9T SRAM cell is demonstrated by a 72 Kb SRAM macro with a Negative Bit-Line (NBL) Write-assist and an adaptive Read operation timing tracing circuit implemented in 65 nm low-leakage CMOS technology. Measured full Read and Write functionality is error free with VDD down to 0.35 V ( 0.15 V lower than the threshold voltage) with 229 KHz frequency and 4.05 μW power. Data is held down to 0.275 V with 2.29 μW Standby power. The minimum energy per operation is 4.5 pJ at 0.5 V. The 72 Kb SRAM macro has wide operation range from 1.2 V down to 0.35 V, with operating frequency of around 200 MHz for VDD around/above 1.0 V.",
"title": ""
},
{
"docid": "7f6e03069810f9d7ef68d6a775b8849b",
"text": "For more than a century, the déjà vu experience has been examined through retrospective surveys, prospective surveys, and case studies. About 60% of the population has experienced déjà vu, and its frequency decreases with age. Déjà vu appears to be associated with stress and fatigue, and it shows a positive relationship with socioeconomic level and education. Scientific explanations of déjà vu fall into 4 categories: dual processing (2 cognitive processes momentarily out of synchrony), neurological (seizure, disruption in neuronal transmission), memory (implicit familiarity of unrecognized stimuli),and attentional (unattended perception followed by attended perception). Systematic research is needed on the prevalence and etiology of this culturally familiar cognitive experience, and several laboratory models may help clarify this illusion of recognition.",
"title": ""
},
{
"docid": "304b4cee4006e87fc4172a3e9de88ed1",
"text": "Recently, graph neural networks (GNNs) have revolutionized the field of graph representation learning through effectively learned node embeddings, and achieved state-of-the-art results in tasks such as node classification and link prediction. However, current GNN methods are inherently flat and do not learn hierarchical representations of graphs—a limitation that is especially problematic for the task of graph classification, where the goal is to predict the label associated with an entire graph. Here we propose DIFFPOOL, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion. DIFFPOOL learns a differentiable soft cluster assignment for nodes at each layer of a deep GNN, mapping nodes to a set of clusters, which then form the coarsened input for the next GNN layer. Our experimental results show that combining existing GNN methods with DIFFPOOL yields an average improvement of 5–10% accuracy on graph classification benchmarks, compared to all existing pooling approaches, achieving a new state-of-the-art on four out of five benchmark data sets.",
"title": ""
},
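The DIFFPOOL abstract above describes learning a soft cluster assignment per layer and using it to coarsen both node features and the adjacency matrix. The numpy sketch below shows one such coarsening step, with a single GCN-style propagation standing in for the GNN blocks; the random weights and toy graph are placeholders, so this is an illustration of the pooling equations rather than the full trainable model.

```python
# Minimal numpy sketch of one DIFFPOOL coarsening step: a soft assignment
# matrix S maps n nodes to k clusters, giving pooled features S^T Z and a
# pooled adjacency S^T A S. A single propagation A @ X @ W stands in for the
# GNN blocks; the random weights are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def diffpool_layer(A, X, k):
    n, d = X.shape
    W_embed = rng.normal(size=(d, d))       # placeholder GNN weights
    W_assign = rng.normal(size=(d, k))
    Z = np.maximum(A @ X @ W_embed, 0.0)    # node embeddings (ReLU GCN step)
    S = softmax(A @ X @ W_assign, axis=1)   # soft cluster assignment, n x k
    X_pooled = S.T @ Z                      # k x d coarsened features
    A_pooled = S.T @ A @ S                  # k x k coarsened adjacency
    return A_pooled, X_pooled

A = rng.integers(0, 2, size=(6, 6)).astype(float)
A = np.triu(A, 1); A = A + A.T              # symmetric toy graph, no self-loops
X = rng.normal(size=(6, 4))
A2, X2 = diffpool_layer(A, X, k=2)
print(A2.shape, X2.shape)                   # (2, 2) (2, 4)
```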
{
"docid": "68c1aa2e3d476f1f24064ed6f0f07fb7",
"text": "Granuloma annulare is a benign, asymptomatic, self-limited papular eruption found in patients of all ages. The primary skin lesion usually is grouped papules in an enlarging annular shape, with color ranging from flesh-colored to erythematous. The two most common types of granuloma annulare are localized, which typically is found on the lateral or dorsal surfaces of the hands and feet; and disseminated, which is widespread. Localized disease generally is self-limited and resolves within one to two years, whereas disseminated disease lasts longer. Because localized granuloma annulare is self-limited, no treatment other than reassurance may be necessary. There are no well-designed randomized controlled trials of the treatment of granuloma annulare. Treatment recommendations are based on the pathophysiology of the disease, expert opinion, and case reports only. Liquid nitrogen, injected steroids, or topical steroids under occlusion have been recommended for treatment of localized disease. Disseminated granuloma annulare may be treated with one of several systemic therapies such as dapsone, retinoids, niacinamide, antimalarials, psoralen plus ultraviolet A therapy, fumaric acid esters, tacrolimus, and pimecrolimus. Consultation with a dermatologist is recommended because of the possible toxicities of these agents.",
"title": ""
},
{
"docid": "717bea69015f1c2e9f9909c3510c825a",
"text": "To assess the impact of anti-vaccine movements that targeted pertussis whole-cell vaccines, we compared pertussis incidence in countries where high coverage with diphtheria-tetanus-pertussis vaccines (DTP) was maintained (Hungary, the former East Germany, Poland, and the USA) with countries where immunisation was disrupted by anti-vaccine movements (Sweden, Japan, UK, The Russian Federation, Ireland, Italy, the former West Germany, and Australia). Pertussis incidence was 10 to 100 times lower in countries where high vaccine coverage was maintained than in countries where immunisation programs were compromised by anti-vaccine movements. Comparisons of neighbouring countries with high and low vaccine coverage further underscore the efficacy of these vaccines. Given the safety and cost-effectiveness of whole-cell pertussis vaccines, our study shows that, far from being obsolete, these vaccines continue to have an important role in global immunisation.",
"title": ""
},
{
"docid": "c83eefbe2eadfee71db7faf0238c5023",
"text": "Successful prosthesis use is largely dependent on providing patients with high-quality, individualized pre-prosthetic training, ideally completed under the supervision of a trained therapist. Computer-based training systems, or ‘virtual coaches,’ designed to augment rehabilitation training protocols are an emerging area of research and could be a convenient and low-cost alternative to supplement the therapy received by the patient. In this contribution we completed an iterative needs focus group to determine important design elements required for an effective virtual coach software package.",
"title": ""
},
{
"docid": "44380ea0107c22d3f6412456f4533482",
"text": "Shadow memory is used by dynamic program analysis tools to store metadata for tracking properties of application memory. The efficiency of mapping between application memory and shadow memory has substantial impact on the overall performance of such analysis tools. However, traditional memory mapping schemes that work well on 32-bit architectures cannot easily port to 64-bit architectures due to the much larger 64-bit address space.\n This paper presents EMS64, an efficient memory shadowing scheme for 64-bit architectures. By taking advantage of application reference locality and unused regions in the 64-bit address space, EMS64 provides a fast and flexible memory mapping scheme without relying on any underlying platform features or requiring any specific shadow memory size. Our experiments show that EMS64 is able to reduce the runtime shadow memory translation overhead to 81% on average, which almost halves the overhead of the fastest 64-bit shadow memory system we are aware of.",
"title": ""
},
{
"docid": "52d6711ebbafd94ab5404e637db80650",
"text": "At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using Qlearning with an -greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.",
"title": ""
},
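The MetaQNN abstract above relies on tabular Q-learning with ε-greedy exploration. The sketch below shows that generic update rule in Python; the MetaQNN-specific parts (encoding layer choices as states and actions, experience replay, the reward from validation accuracy) are omitted, so treat it only as a reminder of the underlying learning rule.

```python
# Generic tabular Q-learning update with epsilon-greedy exploration; the
# MetaQNN specifics (layer-choice state/action encoding, replay) are omitted.
import random
from collections import defaultdict

Q = defaultdict(float)           # Q[(state, action)] -> estimated value

def epsilon_greedy(state, actions, epsilon):
    if random.random() < epsilon:                     # explore
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])  # exploit

def q_update(state, action, reward, next_state, next_actions,
             alpha=0.1, gamma=1.0):
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    td_target = reward + gamma * best_next
    Q[(state, action)] += alpha * (td_target - Q[(state, action)])

# Example step: in state "s0", actions are "conv" or "pool" (toy labels).
a = epsilon_greedy("s0", ["conv", "pool"], epsilon=0.2)
q_update("s0", a, reward=0.0, next_state="s1", next_actions=["conv", "pool"])
```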
{
"docid": "50f896bba89b1906229c5c9800c8ea7b",
"text": "Intra-regional South-South medical tourism is a vastly understudied subject despite its significance in many parts of the Global South. This paper takes issue with the conventional notion of South Africa purely as a high-end \"surgeon and safari\" destination for medical tourists from the Global North. It argues that South-South movement to South Africa for medical treatment is far more significant, numerically and financially, than North-South movement. The general lack of access to medical diagnosis and treatment in SADC countries has led to a growing temporary movement of people across borders to seek help at South African institutions in border towns and in the major cities. These movements are both formal (institutional) and informal (individual) in nature. In some cases, patients go to South Africa for procedures that are not offered in their own countries. In others, patients are referred by doctors and hospitals to South African facilities. But the majority of the movement is motivated by lack of access to basic healthcare at home. The high demand and large informal flow of patients from countries neighbouring South Africa has prompted the South African government to try and formalise arrangements for medical travel to its public hospitals and clinics through inter-country agreements in order to recover the cost of treating non-residents. The danger, for 'disenfranchised' medical tourists who fall outside these agreements, is that medical xenophobia in South Africa may lead to increasing exclusion and denial of treatment. Medical tourism in this region and South-South medical tourism in general are areas that require much additional research.",
"title": ""
},
{
"docid": "e0f66f533c0af19126565160ff423949",
"text": "Antibiotic resistance, prompted by the overuse of antimicrobial agents, may arise from a variety of mechanisms, particularly horizontal gene transfer of virulence and antibiotic resistance genes, which is often facilitated by biofilm formation. The importance of phenotypic changes seen in a biofilm, which lead to genotypic alterations, cannot be overstated. Irrespective of if the biofilm is single microbe or polymicrobial, bacteria, protected within a biofilm from the external environment, communicate through signal transduction pathways (e.g., quorum sensing or two-component systems), leading to global changes in gene expression, enhancing virulence, and expediting the acquisition of antibiotic resistance. Thus, one must examine a genetic change in virulence and resistance not only in the context of the biofilm but also as inextricably linked pathologies. Observationally, it is clear that increased virulence and the advent of antibiotic resistance often arise almost simultaneously; however, their genetic connection has been relatively ignored. Although the complexities of genetic regulation in a multispecies community may obscure a causative relationship, uncovering key genetic interactions between virulence and resistance in biofilm bacteria is essential to identifying new druggable targets, ultimately providing a drug discovery and development pathway to improve treatment options for chronic and recurring infection.",
"title": ""
},
{
"docid": "a3e383cb19c97af5a4e501c7b13d9088",
"text": "Rapid diagnosis and treatment of acute neurological illnesses such as stroke, hemorrhage, and hydrocephalus are critical to achieving positive outcomes and preserving neurologic function—‘time is brain’1–5. Although these disorders are often recognizable by their symptoms, the critical means of their diagnosis is rapid imaging6–10. Computer-aided surveillance of acute neurologic events in cranial imaging has the potential to triage radiology workflow, thus decreasing time to treatment and improving outcomes. Substantial clinical work has focused on computer-assisted diagnosis (CAD), whereas technical work in volumetric image analysis has focused primarily on segmentation. 3D convolutional neural networks (3D-CNNs) have primarily been used for supervised classification on 3D modeling and light detection and ranging (LiDAR) data11–15. Here, we demonstrate a 3D-CNN architecture that performs weakly supervised classification to screen head CT images for acute neurologic events. Features were automatically learned from a clinical radiology dataset comprising 37,236 head CTs and were annotated with a semisupervised natural-language processing (NLP) framework16. We demonstrate the effectiveness of our approach to triage radiology workflow and accelerate the time to diagnosis from minutes to seconds through a randomized, double-blinded, prospective trial in a simulated clinical environment. A deep-learning algorithm is developed to provide rapid and accurate diagnosis of clinical 3D head CT-scan images to triage and prioritize urgent neurological events, thus potentially accelerating time to diagnosis and care in clinical settings.",
"title": ""
},
{
"docid": "75b0a7b0fa0320a3666fb147471dd45f",
"text": "Maximum power densities by air-driven microbial fuel cells (MFCs) are considerably influenced by cathode performance. We show here that application of successive polytetrafluoroethylene (PTFE) layers (DLs), on a carbon/PTFE base layer, to the air-side of the cathode in a single chamber MFC significantly improved coulombic efficiencies (CEs), maximum power densities, and reduced water loss (through the cathode). Electrochemical tests using carbon cloth electrodes coated with different numbers of DLs indicated an optimum increase in the cathode potential of 117 mV with four-DLs, compared to a <10 mV increase due to the carbon base layer alone. In MFC tests, four-DLs was also found to be the optimum number of coatings, resulting in a 171% increase in the CE (from 19.1% to 32%), a 42% increase in the maximum power density (from 538 to 766 mW m ), and measurable water loss was prevented. The increase in CE due is believed to result from the increased power output and the increased operation time (due to a reduction in aerobic degradation of substrate sustained by oxygen diffusion through the cathode). 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f676c503bcf59a8916995a6db3908792",
"text": "Bone tissue engineering has been increasingly studied as an alternative approach to bone defect reconstruction. In this approach, new bone cells are stimulated to grow and heal the defect with the aid of a scaffold that serves as a medium for bone cell formation and growth. Scaffolds made of metallic materials have preferably been chosen for bone tissue engineering applications where load-bearing capacities are required, considering the superior mechanical properties possessed by this type of materials to those of polymeric and ceramic materials. The space holder method has been recognized as one of the viable methods for the fabrication of metallic biomedical scaffolds. In this method, temporary powder particles, namely space holder, are devised as a pore former for scaffolds. In general, the whole scaffold fabrication process with the space holder method can be divided into four main steps: (i) mixing of metal matrix powder and space-holding particles; (ii) compaction of granular materials; (iii) removal of space-holding particles; (iv) sintering of porous scaffold preform. In this review, detailed procedures in each of these steps are presented. Technical challenges encountered during scaffold fabrication with this specific method are addressed. In conclusion, strategies are yet to be developed to address problematic issues raised, such as powder segregation, pore inhomogeneity, distortion of pore sizes and shape, uncontrolled shrinkage and contamination.",
"title": ""
},
{
"docid": "1d82d994635a0bd0137febd74b8c3835",
"text": "research A. Agrawal J. Basak V. Jain R. Kothari M. Kumar P. A. Mittal N. Modani K. Ravikumar Y. Sabharwal R. Sureka Marketing decisions are typically made on the basis of research conducted using direct mailings, mall intercepts, telephone interviews, focused group discussion, and the like. These methods of marketing research can be time-consuming and expensive, and can require a large amount of effort to ensure accurate results. This paper presents a novel approach for conducting online marketing research based on several concepts such as active learning, matched control and experimental groups, and implicit and explicit experiments. These concepts, along with the opportunity provided by the increasing numbers of online shoppers, enable rapid, systematic, and cost-effective marketing research.",
"title": ""
}
] |
scidocsrr
|
0f5f826dced62cc765fa9d8b491c14d9
|
Big Data for Industry 4.0: A Conceptual Framework
|
[
{
"docid": "b206a5f5459924381ef6c46f692c7052",
"text": "The Konstanz Information Miner is a modular environment, which enables easy visual assembly and interactive execution of a data pipeline. It is designed as a teaching, research and collaboration platform, which enables simple integration of new algorithms and tools as well as data manipulation or visualization methods in the form of new modules or nodes. In this paper we describe some of the design aspects of the underlying architecture, briey sketch how new nodes can be incorporated, and highlight some of the new features of version 2.0.",
"title": ""
}
] |
[
{
"docid": "160fefce1158a9a70a61869d54c4c39a",
"text": "We present a new approach for efficient approximate nearest neighbor (ANN) search in high dimensional spaces, extending the idea of Product Quantization. We propose a two level product and vector quantization tree that reduces the number of vector comparisons required during tree traversal. Our approach also includes a novel highly parallelizable re-ranking method for candidate vectors by efficiently reusing already computed intermediate values. Due to its small memory footprint during traversal the method lends itself to an efficient, parallel GPU implementation. This Product Quantization Tree (PQT) approach significantly outperforms recent state of the art methods for high dimensional nearest neighbor queries on standard reference datasets. Ours is the first work that demonstrates GPU performance superior to CPU performance on high dimensional, large scale ANN problems in time-critical real-world applications, like loop-closing in videos.",
"title": ""
},
{
"docid": "b1a0a76e73aa5b0a893e50b2fadf0ad2",
"text": "The field of occupational therapy, as with all facets of health care, has been profoundly affected by the changing climate of health care delivery. The combination of cost-effectiveness and quality of care has become the benchmark for and consequent drive behind the rise of managed health care delivery systems. The spawning of outcomes research is in direct response to the need for comparative databases to provide results of effectiveness in health care treatment protocols, evaluations of health-related quality of life, and cost containment measures. Outcomes management is the application of outcomes research data by all levels of health care providers. The challenges facing occupational therapists include proving our value in an economic trend of downsizing, competing within the medical profession, developing and affiliating with new payer sources, and reengineering our careers to meet the needs of the new, nontraditional health care marketplace.",
"title": ""
},
{
"docid": "3f5083aca7cb8952ba5bf421cb34fab6",
"text": "Thyroid gland is butterfly shaped organ which consists of two cone lobes and belongs to the endocrine system. It lies in front of the neck below the adams apple. Thyroid disorders are some kind of abnormalities in thyroid gland which can give rise to nodules like hypothyroidism, hyperthyroidism, goiter, benign and malignant etc. Ultrasound (US) is one among the hugely used modality to detect the thyroid disorders because it has some benefits over other techniques like non-invasiveness, low cost, free of ionizing radiations etc. This paper provides a concise overview about segmentation of thyroid nodules and importance of neural networks comparative to other techniques.",
"title": ""
},
{
"docid": "3c4a8623330c48558ca178a82b68f06c",
"text": "Humans assimilate information from the traffic environment mainly through visual perception. Obviously, the dominant information required to conduct a vehicle can be acquired with visual sensors. However, in contrast to most other sensor principles, video signals contain relevant information in a highly indirect manner and hence visual sensing requires sophisticated machine vision and image understanding techniques. This paper provides an overview on the state of research in the field of machine vision for intelligent vehicles. The functional spectrum addressed covers the range from advanced driver assistance systems to autonomous driving. The organization of the article adopts the typical order in image processing pipelines that successively condense the rich information and vast amount of data in video sequences. Data-intensive low-level “early vision” techniques first extract features that are later grouped and further processed to obtain information of direct relevance for vehicle guidance. Recognition and classification schemes allow to identify specific objects in a traffic scene. Recently, semantic labeling techniques using convolutional neural networks have achieved impressive results in this field. High-level decisions of intelligent vehicles are often influenced by map data. The emerging role of machine vision in the mapping and localization process is illustrated at the example of autonomous driving. Scene representation methods are discussed that organize the information from all sensors and data sources and thus build the interface between perception and planning. Recently, vision benchmarks have been tailored to various tasks in traffic scene perception that provide a metric for the rich diversity of machine vision methods. Finally, the paper addresses computing architectures suited to real-time implementation. Throughout the paper, numerous specific examples and real world experiments with prototype vehicles are presented.",
"title": ""
},
{
"docid": "3adb2815bceb4a3bf11e5d3a595ac098",
"text": "Orientation estimation using low cost sensors is an important task for Micro Aerial Vehicles (MAVs) in order to obtain a good feedback for the attitude controller. The challenges come from the low accuracy and noisy data of the MicroElectroMechanical System (MEMS) technology, which is the basis of modern, miniaturized inertial sensors. In this article, we describe a novel approach to obtain an estimation of the orientation in quaternion form from the observations of gravity and magnetic field. Our approach provides a quaternion estimation as the algebraic solution of a system from inertial/magnetic observations. We separate the problems of finding the \"tilt\" quaternion and the heading quaternion in two sub-parts of our system. This procedure is the key for avoiding the impact of the magnetic disturbances on the roll and pitch components of the orientation when the sensor is surrounded by unwanted magnetic flux. We demonstrate the validity of our method first analytically and then empirically using simulated data. We propose a novel complementary filter for MAVs that fuses together gyroscope data with accelerometer and magnetic field readings. The correction part of the filter is based on the method described above and works for both IMU (Inertial Measurement Unit) and MARG (Magnetic, Angular Rate, and Gravity) sensors. We evaluate the effectiveness of the filter and show that it significantly outperforms other common methods, using publicly available datasets with ground-truth data recorded during a real flight experiment of a micro quadrotor helicopter.",
"title": ""
},
{
"docid": "441633276271b94dc1bd3e5e28a1014d",
"text": "While a large number of consumers in the US and Europe frequently shop on the Internet, research on what drives consumers to shop online has typically been fragmented. This paper therefore proposes a framework to increase researchers’ understanding of consumers’ attitudes toward online shopping and their intention to shop on the Internet. The framework uses the constructs of the Technology Acceptance Model (TAM) as a basis, extended by exogenous factors and applies it to the online shopping context. The review shows that attitudes toward online shopping and intention to shop online are not only affected by ease of use, usefulness, and enjoyment, but also by exogenous factors like consumer traits, situational factors, product characteristics, previous online shopping experiences, and trust in online shopping.",
"title": ""
},
{
"docid": "52cde6191c79d085127045a62deacf31",
"text": "Deep Reinforcement Learning methods have achieved state of the art performance in learning control policies for the games in the Atari 2600 domain. One of the important parameters in the Arcade Learning Environment (ALE, [Bellemare et al., 2013]) is the frame skip rate. It decides the granularity at which agents can control game play. A frame skip value of k allows the agent to repeat a selected action k number of times. The current state of the art architectures like Deep QNetwork (DQN, [Mnih et al., 2015]) and Dueling Network Architectures (DuDQN, [Wang et al., 2015]) consist of a framework with a static frame skip rate, where the action output from the network is repeated for a fixed number of frames regardless of the current state. In this paper, we propose a new architecture, Dynamic Frame skip Deep Q-Network (DFDQN) which makes the frame skip rate a dynamic learnable parameter. This allows us to choose the number of times an action is to be repeated based on the current state. We show empirically that such a setting improves the performance on relatively harder games like Seaquest.",
"title": ""
},
{
"docid": "374ee37f61ec6ff27e592c6a42ee687f",
"text": "Leaf vein forms the basis of leaf characterization and classification. Different species have different leaf vein patterns. It is seen that leaf vein segmentation will help in maintaining a record of all the leaves according to their specific pattern of veins thus provide an effective way to retrieve and store information regarding various plant species in database as well as provide an effective means to characterize plants on the basis of leaf vein structure which is unique for every species. The algorithm proposes a new way of segmentation of leaf veins with the use of Odd Gabor filters and the use of morphological operations for producing a better output. The Odd Gabor filter gives an efficient output and is robust and scalable as compared with the existing techniques as it detects the fine fiber like veins present in leaves much more efficiently.",
"title": ""
},
{
"docid": "b941dc9133a12aad0a75d41112e91aa8",
"text": "Recurrent neural network grammars (RNNG) are a recently proposed probabilistic generative modeling family for natural language. They show state-ofthe-art language modeling and parsing performance. We investigate what information they learn, from a linguistic perspective, through various ablations to the model and the data, and by augmenting the model with an attention mechanism (GA-RNNG) to enable closer inspection. We find that explicit modeling of composition is crucial for achieving the best performance. Through the attention mechanism, we find that headedness plays a central role in phrasal representation (with the model’s latent attention largely agreeing with predictions made by hand-crafted head rules, albeit with some important differences). By training grammars without nonterminal labels, we find that phrasal representations depend minimally on nonterminals, providing support for the endocentricity hypothesis.",
"title": ""
},
{
"docid": "a31ac47cd08fe2ede7192c1ca572076b",
"text": "Pipes are present in most of the infrastructure around us - in refineries, chemical plants, power plants, not to mention sewer, gas and water distribution networks. Inspection of these pipes is extremely important, as failures may result in catastrophic accidents with loss of lives. However, inspection of small pipes (from 3 to 6 inches) is usually neglected or performed only partially due to the lack of satisfactory tools. This paper introduces a new series of robots named PipeTron, developed especially for inspection of pipes in refineries and power plants. The mobility concept and design of each version will be described, follower by results of field deployment and considerations for future improvements.",
"title": ""
},
{
"docid": "c3a3f4128d4268f174f278be4039f7b0",
"text": "Suicide pacts are uncommon and mainly committed by male-female pairs in a consortial relationship. The victims frequently choose methods such as hanging, poisoning, using a firearm, etc; however, a case of a suicide pact by drowning is rare in forensic literature. We report a case where a male and a female, both young adults, in a relationship of adopted \"brother of convenience\" were found drowned in a river. The victims were bound together at their wrists which helped with our conclusion this was a suicide pact. The medico-legal importance of wrist binding in drowning cases is also discussed in this article.",
"title": ""
},
{
"docid": "a1ed789387713c1351b737f28b4c4eb0",
"text": "Matching local geometric features on real-world depth images is a challenging task due to the noisy, low-resolution, and incomplete nature of 3D scan data. These difficulties limit the performance of current state-of-art methods, which are typically based on histograms over geometric properties. In this paper, we present 3DMatch, a data-driven model that learns a local volumetric patch descriptor for establishing correspondences between partial 3D data. To amass training data for our model, we propose a self-supervised feature learning method that leverages the millions of correspondence labels found in existing RGB-D reconstructions. Experiments show that our descriptor is not only able to match local geometry in new scenes for reconstruction, but also generalize to different tasks and spatial scales (e.g. instance-level object model alignment for the Amazon Picking Challenge, and mesh surface correspondence). Results show that 3DMatch consistently outperforms other state-of-the-art approaches by a significant margin. Code, data, benchmarks, and pre-trained models are available online at http://3dmatch.cs.princeton.edu.",
"title": ""
},
{
"docid": "858651d38d25df7f3c9a5e497b5c3dce",
"text": "Identification and recognition of the cephalic vein in the deltopectoral triangle is of critical importance when considering emergency catheterization procedures. The aim of our study was to conduct a cadaveric study to access data regarding the topography and the distribution patterns of the cephalic vein as it relates to the deltopectoral triangle. One hundred formalin fixed cadavers were examined. The cephalic vein was found in 95% (190 right and left) specimens, while in the remaining 5% (10) the cephalic vein was absent. In 80% (152) of cases the cephalic vein was found emerging superficially in the lateral portion of the deltopectoral triangle. In 30% (52) of these 152 cases the cephalic vein received one tributary within the deltopectoral triangle, while in 70% (100) of the specimens it received two. In the remaining 20% (38) of cases the cephalic vein was located deep to the deltopectoral fascia and fat and did not emerge through the deltopectoral triangle but was identified medially to the coracobrachialis and inferior to the medial border of the deltoid. In addition, in 4 (0.2%) of the specimens the cephalic vein, after crossing the deltopectoral triangle, ascended anterior and superior to the clavicle to drain into the subclavian vein. In these specimens a collateral branch was observed to communicate between the cephalic and external jugular veins. In 65.2% (124) of the cases the cephalic vein traveled with the deltoid branch of the thoracoacromial trunk. The length of the cephalic vein within the deltopectoral triangle ranged from 3.5 cm to 8.2 cm with a mean of 4.8+/-0.7 cm. The morphometric analysis revealed a mean cephalic vein diameter of 0.8+/-0.1 cm with a range of 0.1 cm to 1.2 cm. The cephalic vein is relatively large and constant, usually allowing for easy cannulation.",
"title": ""
},
{
"docid": "3a301b11b704e34af05c9072d8353696",
"text": "Attention-deficit hyperactivity disorder (ADHD) is typically characterized as a disorder of inattention and hyperactivity/impulsivity but there is increasing evidence of deficits in motivation. Using positron emission tomography (PET), we showed decreased function in the brain dopamine reward pathway in adults with ADHD, which, we hypothesized, could underlie the motivation deficits in this disorder. To evaluate this hypothesis, we performed secondary analyses to assess the correlation between the PET measures of dopamine D2/D3 receptor and dopamine transporter availability (obtained with [11C]raclopride and [11C]cocaine, respectively) in the dopamine reward pathway (midbrain and nucleus accumbens) and a surrogate measure of trait motivation (assessed using the Achievement scale on the Multidimensional Personality Questionnaire or MPQ) in 45 ADHD participants and 41 controls. The Achievement scale was lower in ADHD participants than in controls (11±5 vs 14±3, P<0.001) and was significantly correlated with D2/D3 receptors (accumbens: r=0.39, P<0.008; midbrain: r=0.41, P<0.005) and transporters (accumbens: r=0.35, P<0.02) in ADHD participants, but not in controls. ADHD participants also had lower values in the Constraint factor and higher values in the Negative Emotionality factor of the MPQ but did not differ in the Positive Emotionality factor—and none of these were correlated with the dopamine measures. In ADHD participants, scores in the Achievement scale were also negatively correlated with symptoms of inattention (CAARS A, E and SWAN I). These findings provide evidence that disruption of the dopamine reward pathway is associated with motivation deficits in ADHD adults, which may contribute to attention deficits and supports the use of therapeutic interventions to enhance motivation in ADHD.",
"title": ""
},
{
"docid": "cfc3d8ee024928151edb5ee2a1d28c13",
"text": "Objective: In this paper, we present a systematic literature review of motivation in Software Engineering. The objective of this review is to plot the landscape of current reported knowledge in terms of what motivates developers, what de-motivates them and how existing models address motivation. Methods: We perform a systematic literature review of peer reviewed published studies that focus on motivation in Software Engineering. Systematic reviews are well established in medical research and are used to systematically analyse the literature addressing specific research questions. Results: We found 92 papers related to motivation in Software Engineering. Fifty-six percent of the studies reported that Software Engineers are distinguishable from other occupational groups. Our findings suggest that Software Engineers are likely to be motivated according to three related factors: their ‘characteristics’ (for example, their need for variety); internal ‘controls’ (for example, their personality) and external ‘moderators’ (for example, their career stage). The literature indicates that de-motivated engineers may leave the organisation or take more sick-leave, while motivated engineers will increase their productivity and remain longer in the organisation. Aspects of the job that motivate Software Engineers include problem solving, working to benefit others and technical challenge. Our key finding is that the published models of motivation in Software Engineering are disparate and do not reflect the complex needs of Software Engineers in their career stages, cultural and environmental settings. Conclusions: The literature on motivation in Software Engineering presents a conflicting and partial picture of the area. It is clear that motivation is context dependent and varies from one engineer to another. The most commonly cited motivator is the job itself, yet we found very little work on what it is about that job that Software Engineers find motivating. Furthermore, surveys are often aimed at how Software Engineers feel about ‘the organisation’, rather than ‘the profession’. Although models of motivation in Software Engineering are reported in the literature, they do not account for the changing roles and environment in which Software Engineers operate. Overall, our findings indicate that there is no clear understanding of the Software Engineers’ job, what motivates Software Engineers, how they are motivated, or the outcome and benefits of motivating Software Engineers. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5eb1aa594c3c6210f029b5bbf6acc599",
"text": "Intestinal nematodes affecting dogs, i.e. roundworms, hookworms and whipworms, have a relevant health-risk impact for animals and, for most of them, for human beings. Both dogs and humans are typically infected by ingesting infective stages, (i.e. larvated eggs or larvae) present in the environment. The existence of a high rate of soil and grass contamination with infective parasitic elements has been demonstrated worldwide in leisure, recreational, public and urban areas, i.e. parks, green areas, bicycle paths, city squares, playgrounds, sandpits, beaches. This review discusses the epidemiological and sanitary importance of faecal pollution with canine intestinal parasites in urban environments and the integrated approaches useful to minimize the risk of infection in different settings.",
"title": ""
},
{
"docid": "a1fed0bcce198ad333b45bfc5e0efa12",
"text": "Contemporary games are making significant strides towards offering complex, immersive experiences for players. We can now explore sprawling 3D virtual environments populated by beautifully rendered characters and objects with autonomous behavior, engage in highly visceral action-oriented experiences offering a variety of missions with multiple solutions, and interact in ever-expanding online worlds teeming with physically customizable player avatars.",
"title": ""
},
{
"docid": "473968c14db4b189af126936fd5486ca",
"text": "Disclaimer/Complaints regulations If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please let the Library know, stating your reasons. In case of a legitimate complaint, the Library will make the material inaccessible and/or remove it from the website. Please Ask the Library: http://uba.uva.nl/en/contact, or a letter to: Library of the University of Amsterdam, Secretariat, Singel 425, 1012 WP Amsterdam, The Netherlands. You will be contacted as soon as possible.",
"title": ""
},
{
"docid": "ef925e9d448cf4ca9a889b5634b685cf",
"text": "This paper proposes an ameliorated wheel-based cable inspection robot, which is able to climb up a vertical cylindrical cable on the cable-stayed bridge. The newly-designed robot in this paper is composed of two equally spaced modules, which are joined by connecting bars to form a closed hexagonal body to clasp on the cable. Another amelioration is the newly-designed electric circuit, which is employed to limit the descending speed of the robot during its sliding down along the cable. For the safe landing in case of electricity broken-down, a gas damper with a slider-crank mechanism is introduced to exhaust the energy generated by the gravity when the robot is slipping down. For the present design, with payloads below 3.5 kg, the robot can climb up a cable with diameters varying from 65 mm to 205 mm. The landing system is tested experimentally and a simplified mathematical model is analyzed. Several climbing experiments performed on real cables show the capability of the proposed robot.",
"title": ""
},
{
"docid": "922cc239f2511801da980620aa87ee94",
"text": "Alloying is an effective way to engineer the band-gap structure of two-dimensional transition-metal dichalcogenide materials. Molybdenum and tungsten ditelluride alloyed with sulfur or selenium layers (MX2xTe2(1-x), M = Mo, W and X = S, Se) have a large band-gap tunability from metallic to semiconducting due to the 2H-to-1T' phase transition as controlled by the alloy concentrations, whereas the alloy atom distribution in these two phases remains elusive. Here, combining atomic resolution Z-contrast scanning transmission electron microscopy imaging and density functional theory (DFT), we discovered that anisotropic ordering occurs in the 1T' phase, in sharp contrast to the isotropic alloy behavior in the 2H phase under similar alloy concentration. The anisotropic ordering is presumably due to the anisotropic bonding in the 1T' phase, as further elaborated by DFT calculations. Our results reveal the atomic anisotropic alloyed behavior in 1T' phase layered alloys regardless of their alloy concentration, shining light on fine-tuning their physical properties via engineering the alloyed atomic structure.",
"title": ""
}
] |
scidocsrr
|
3deb1e177be03258d46216481d401d2a
|
Sentiment Analysis of Yelp's Ratings Based on Text
|
[
{
"docid": "8a2586b1059534c5a23bac9c1cc59906",
"text": "The web contains a wealth of product reviews, but sifting through them is a daunting task. Ideally, an opinion mining tool would process a set of search results for a given item, generating a list of product attributes (quality, features, etc.) and aggregating opinions about each of them (poor, mixed, good). We begin by identifying the unique properties of this problem and develop a method for automatically distinguishing between positive and negative reviews. Our classifier draws on information retrieval techniques for feature extraction and scoring, and the results for various metrics and heuristics vary depending on the testing situation. The best methods work as well as or better than traditional machine learning. When operating on individual sentences collected from web searches, performance is limited due to noise and ambiguity. But in the context of a complete web-based tool and aided by a simple method for grouping sentences into attributes, the results are qualitatively quite useful.",
"title": ""
}
] |
[
{
"docid": "1313fbdd0721b58936a05da5080239df",
"text": "Bug tracking systems are valuable assets for managing maintenance activities. They are widely used in open-source projects as well as in the software industry. They collect many different kinds of issues: requests for defect fixing, enhancements, refactoring/restructuring activities and organizational issues. These different kinds of issues are simply labeled as \"bug\" for lack of a better classification support or of knowledge about the possible kinds.\n This paper investigates whether the text of the issues posted in bug tracking systems is enough to classify them into corrective maintenance and other kinds of activities.\n We show that alternating decision trees, naive Bayes classifiers, and logistic regression can be used to accurately distinguish bugs from other kinds of issues. Results from empirical studies performed on issues for Mozilla, Eclipse, and JBoss indicate that issues can be classified with between 77% and 82% of correct decisions.",
"title": ""
},
{
"docid": "7456af2a110a0f05b39d7d72e64ab553",
"text": "Initially mobile phones were developed only for voice communication but now days the scenario has changed, voice communication is just one aspect of a mobile phone. There are other aspects which are major focus of interest. Two such major factors are web browser and GPS services. Both of these functionalities are already implemented but are only in the hands of manufacturers not in the hands of users because of proprietary issues, the system does not allow the user to access the mobile hardware directly. But now, after the release of android based open source mobile phone a user can access the hardware directly and design customized native applications to develop Web and GPS enabled services and can program the other hardware components like camera etc. In this paper we will discuss the facilities available in android platform for implementing LBS services (geo-services).",
"title": ""
},
{
"docid": "41b83a85c1c633785766e3f464cbd7a6",
"text": "Distributed systems are easier to build than ever with the emergence of new, data-centric abstractions for storing and computing over massive datasets. However, similar abstractions do not exist for storing and accessing meta-data. To fill this gap, Tango provides developers with the abstraction of a replicated, in-memory data structure (such as a map or a tree) backed by a shared log. Tango objects are easy to build and use, replicating state via simple append and read operations on the shared log instead of complex distributed protocols; in the process, they obtain properties such as linearizability, persistence and high availability from the shared log. Tango also leverages the shared log to enable fast transactions across different objects, allowing applications to partition state across machines and scale to the limits of the underlying log without sacrificing consistency.",
"title": ""
},
{
"docid": "a73917d842c18ed9c36a13fe9187ea4c",
"text": "Brain Magnetic Resonance Image (MRI) plays a non-substitutive role in clinical diagnosis. The symptom of many diseases corresponds to the structural variants of brain. Automatic structure segmentation in brain MRI is of great importance in modern medical research. Some methods were developed for automatic segmenting of brain MRI but failed to achieve desired accuracy. In this paper, we proposed a new patch-based approach for automatic segmentation of brain MRI using convolutional neural network (CNN). Each brain MRI acquired from a small portion of public dataset is firstly divided into patches. All of these patches are then used for training CNN, which is used for automatic segmentation of brain MRI. Experimental results showed that our approach achieved better segmentation accuracy compared with other deep learning methods.",
"title": ""
},
{
"docid": "071b46c04389b6fe3830989a31991d0d",
"text": "Direct slicing of CAD models to generate process planning instructions for solid freeform fabrication may overcome inherent disadvantages of using stereolithography format in terms of the process accuracy, ease of file management, and incorporation of multiple materials. This paper will present the results of our development of a direct slicing algorithm for layered freeform fabrication. The direct slicing algorithm was based on a neutral, international standard (ISO 10303) STEP-formatted non-uniform rational B-spline (NURBS) geometric representation and is intended to be independent of any commercial CAD software. The following aspects of the development effort will be presented: (1) determination of optimal build direction based upon STEP-based NURBS models; (2) adaptive subdivision of NURBS data for geometric refinement; and (3) ray-casting slice generation into sets of raster patterns. The development also provides for multi-material slicing and will provide an effective tool in heterogeneous slicing processes. q 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "619b39299531f126769aa96b3e0e84e1",
"text": "In this paper, we focus on the opinion target extraction as part of the opinion mining task. We model the problem as an information extraction task, which we address based on Conditional Random Fields (CRF). As a baseline we employ the supervised algorithm by Zhuang et al. (2006), which represents the state-of-the-art on the employed data. We evaluate the algorithms comprehensively on datasets from four different domains annotated with individual opinion target instances on a sentence level. Furthermore, we investigate the performance of our CRF-based approach and the baseline in a singleand cross-domain opinion target extraction setting. Our CRF-based approach improves the performance by 0.077, 0.126, 0.071 and 0.178 regarding F-Measure in the single-domain extraction in the four domains. In the crossdomain setting our approach improves the performance by 0.409, 0.242, 0.294 and 0.343 regarding F-Measure over the baseline.",
"title": ""
},
{
"docid": "2f8361f2943ff90bf98c6b8a207086c4",
"text": "Real-life bugs are successful because of their unfailing ability to adapt. In particular this applies to their ability to adapt to strategies that are meant to eradicate them as a species. Software bugs have some of these same traits. We will discuss these traits, and consider what we can do about them.",
"title": ""
},
{
"docid": "2d8f76cef3d0c11441bbc8f5487588cb",
"text": "Abstract. It seems natural to assume that the more It seems natural to assume that the more closely robots come to resemble people, the more likely they are to elicit the kinds of responses people direct toward each other. However, subtle flaws in appearance and movement only seem eerie in very humanlike robots. This uncanny phenomenon may be symptomatic of entities that elicit a model of a human other but do not measure up to it. If so, a very humanlike robot may provide the best means of finding out what kinds of behavior are perceived as human, since deviations from a human other are more obvious. In pursuing this line of inquiry, it is essential to identify the mechanisms involved in evaluations of human likeness. One hypothesis is that an uncanny robot elicits an innate fear of death and culturally-supported defenses for coping with death’s inevitability. An experiment, which borrows from the methods of terror management research, was performed to test this hypothesis. Across all questions subjects who were exposed to a still image of an uncanny humanlike robot had on average a heightened preference for worldview supporters and a diminished preference for worldview threats relative to the control group.",
"title": ""
},
{
"docid": "48560dec9177dd68e6a2827395370a4e",
"text": "We present Segment-level Neural CRF, which combines neural networks with a linear chain CRF for segment-level sequence modeling tasks such as named entity recognition (NER) and syntactic chunking. Our segment-level CRF can consider higher-order label dependencies compared with conventional word-level CRF. Since it is difficult to consider all possible variable length segments, our method uses segment lattice constructed from the word-level tagging model to reduce the search space. Performing experiments on NER and chunking, we demonstrate that our method outperforms conventional word-level CRF with neural networks.",
"title": ""
},
{
"docid": "4d1eae0f247f1c2db9e3c544a65c041f",
"text": "This papers presents a new system using circular markers to estimate the pose of a camera. Contrary to most markersbased systems using square markers, we advocate the use of circular markers, as we believe that they are easier to detect and provide a pose estimate that is more robust to noise. Unlike existing systems using circular markers, our method computes the exact pose from one single circular marker, and do not need specific points being explicitly shown on the marker (like center, or axes orientation). Indeed, the center and orientation is encoded directly in the marker’s code. We can thus use the entire marker surface for the code design. After solving the back projection problem for one conic correspondence, we end up with two possible poses. We show how to find the marker’s code, rotation and final pose in one single step, by using a pyramidal cross-correlation optimizer. The marker tracker runs at 100 frames/second on a desktop PC and 30 frames/second on a hand-held UMPC.",
"title": ""
},
{
"docid": "51e3a023053b628de30adeb0730d3832",
"text": "The frequency characteristics of subpixel-based decimation with RGB vertical stripe and RGBX square-shaped subpixel arrangements are studied. To achieve higher apparent resolution than pixel-based decimation, the sampling locations are specially chosen for each of two subpixel arrangements, resulting in relatively small magnitudes of horizontal and vertical aliasing spectra in frequency domain. Thanks to 2-D RGBX square-shaped subpixel arrangement, all the horizontal, vertical, diagonal and anti-diagonal aliasing spectra merely contain low-frequency information, indicating that subpixel-based decimation with RGBX square-shaped panel is more effective in retaining original high frequency details than RGB vertical stripe subpixel arrangement.",
"title": ""
},
{
"docid": "6a6191695c948200658ad6020f21f203",
"text": "Given a random pair of images, an arbitrary style transfer method extracts the feel from the reference image to synthesize an output based on the look of the other content image. Recent arbitrary style transfer methods transfer second order statistics from reference image onto content image via a multiplication between content image features and a transformation matrix, which is computed from features with a pre-determined algorithm. These algorithms either require computationally expensive operations, or fail to model the feature covariance and produce artifacts in synthesized images. Generalized from these methods, in this work, we derive the form of transformation matrix theoretically and present an arbitrary style transfer approach that learns the transformation matrix with a feed-forward network. Our algorithm is highly efficient yet allows a flexible combination of multi-level styles while preserving content affinity during style transfer process. We demonstrate the effectiveness of our approach on four tasks: artistic style transfer, video and photo-realistic style transfer as well as domain adaptation, including comparisons with the stateof-the-art methods.",
"title": ""
},
{
"docid": "3df76261ff7981794e9c3d1332efe023",
"text": "The complete sequence of the 16,569-base pair human mitochondrial genome is presented. The genes for the 12S and 16S rRNAs, 22 tRNAs, cytochrome c oxidase subunits I, II and III, ATPase subunit 6, cytochrome b and eight other predicted protein coding genes have been located. The sequence shows extreme economy in that the genes have none or only a few noncoding bases between them, and in many cases the termination codons are not coded in the DNA but are created post-transcriptionally by polyadenylation of the mRNAs.",
"title": ""
},
{
"docid": "4cc52c8b6065d66472955dff9200b71f",
"text": "Over the past few years there has been an increasing focus on the development of features for resource management within the Linux kernel. The addition of the fair group scheduler has enabled the provisioning of proportional CPU time through the specification of group weights. Since the scheduler is inherently workconserving in nature, a task or a group can consume excess CPU share in an otherwise idle system. There are many scenarios where this extra CPU share can cause unacceptable utilization or latency. CPU bandwidth provisioning or limiting approaches this problem by providing an explicit upper bound on usage in addition to the lower bound already provided by shares. There are many enterprise scenarios where this functionality is useful. In particular are the cases of payper-use environments, and latency provisioning within non-homogeneous environments. This paper details the requirements behind this feature, the challenges involved in incorporating into CFS (Completely Fair Scheduler), and the future development road map for this feature. 1 CPU as a manageable resource Before considering the aspect of bandwidth provisioning let us first review some of the basic existing concepts currently arbitrating entity management within the scheduler. There are two major scheduling classes within the Linux CPU scheduler, SCHED_RT and SCHED_NORMAL. When runnable, entities from the former, the real-time scheduling class, will always be elected to run over those from the normal scheduling class. Prior to v2.6.24, the scheduler had no notion of any entity larger than that of single task1. The available management APIs reflected this and the primary control of bandwidth available was nice(2). In v2.6.24, the completely fair scheduler (CFS) was merged, replacing the existing SCHED_NORMAL scheduling class. This new design delivered weight based scheduling of CPU bandwidth, enabling arbitrary partitioning. This allowed support for group scheduling to be added, managed using cgroups through the CPU controller sub-system. This support allows for the flexible creation of scheduling groups, allowing the fraction of CPU resources received by a group of tasks to be arbitrated as a whole. The addition of this support has been a major step in scheduler development, enabling Linux to align more closely with enterprise requirements for managing this resouce. The hierarchies supported by this model are flexible, and groups may be nested within groups. Each group entity’s bandwidth is provisioned using a corresponding shares attribute which defines its weight. Similarly, the nice(2) API was subsumed to control the weight of an individual task entity. Figure 1 shows the hierarchical groups that might be created in a typical university server to differentiate CPU bandwidth between users such as professors, students, and different departments. One way to think about shares is that it provides lowerbound provisioning. When CPU bandwidth is scheduled at capacity, all runnable entities will receive bandwidth in accordance with the ratio of their share weight. It’s key to observe here that not all entities may be runnable 1Recall that under Linux any kernel-backed thread is considered individual task entity, there is no typical notion of a process in scheduling context.",
"title": ""
},
{
"docid": "c2802496761276ddc99949f8c5667bbc",
"text": "A long-term goal of AI is to produce agents that can learn a diversity of skills throughout their lifetimes and continuously improve those skills via experience. A longstanding obstacle towards that goal is catastrophic forgetting, which is when learning new information erases previously learned information. Catastrophic forgetting occurs in artificial neural networks (ANNs), which have fueled most recent advances in AI. A recent paper proposed that catastrophic forgetting in ANNs can be reduced by promoting modularity, which can limit forgetting by isolating task information to specific clusters of nodes and connections (functional modules). While the prior work did show that modular ANNs suffered less from catastrophic forgetting, it was not able to produce ANNs that possessed task-specific functional modules, thereby leaving the main theory regarding modularity and forgetting untested. We introduce diffusion-based neuromodulation, which simulates the release of diffusing, neuromodulatory chemicals within an ANN that can modulate (i.e. up or down regulate) learning in a spatial region. On the simple diagnostic problem from the prior work, diffusion-based neuromodulation 1) induces task-specific learning in groups of nodes and connections (task-specific localized learning), which 2) produces functional modules for each subtask, and 3) yields higher performance by eliminating catastrophic forgetting. Overall, our results suggest that diffusion-based neuromodulation promotes task-specific localized learning and functional modularity, which can help solve the challenging, but important problem of catastrophic forgetting.",
"title": ""
},
{
"docid": "eebf03df49eb4a99f61d371e059ef43e",
"text": "In theoretical cognitive science, there is a tension between highly structured models whose parameters have a direct psychological interpretation and highly complex, general-purpose models whose parameters and representations are difficult to interpret. The former typically provide more insight into cognition but the latter often perform better. This tension has recently surfaced in the realm of educational data mining, where a deep learning approach to estimating student proficiency, termed deep knowledge tracing or DKT [17], has demonstrated a stunning performance advantage over the mainstay of the field, Bayesian knowledge tracing or BKT [3].",
"title": ""
},
{
"docid": "0cb2c9d4f7c54450bddd84eed70ed403",
"text": "The well-known Mori-Zwanzig theory tells us that model reduction leads to memory effect. For a long time, modeling the memory effect accurately and efficiently has been an important but nearly impossible task in developing a good reduced model. In this work, we explore a natural analogy between recurrent neural networks and the Mori-Zwanzig formalism to establish a systematic approach for developing reduced models with memory. Two training models-a direct training model and a dynamically coupled training model-are proposed and compared. We apply these methods to the Kuramoto-Sivashinsky equation and the Navier-Stokes equation. Numerical experiments show that the proposed method can produce reduced model with good performance on both short-term prediction and long-term statistical properties. In science and engineering, many high-dimensional dynamical systems are too complicated to solve in detail. Nor is it necessary since usually we are only interested in a small subset of the variables representing the gross behavior of the system. Therefore, it is useful to develop reduced models which can approximate the variables of interest without solving the full system. This is the celebrated model reduction problem. Even though model reduction has been widely explored in many fields, to this day there is still a lack of systematic and reliable methodologies for model reduction. One has to rely on uncontrolled approximations in order to move things forward. On the other hand, there is in principle a rather solid starting point, the Mori-Zwanzig (M-Z) theory, for performing model reduction [1], [2]. In M-Z, the effect of unresolved variables on resolved ones is represented as a memory and a noise term, giving rise to the so-called generalized Langevin equation (GLE). Solving the GLE accurately is almost equivalent to solving the full system, because the memory kernel and noise terms contain the full information for the unresolved variables. This means that the M-Z theory does not directly lead to a reduction of complexity or the computational cost. However, it does provide a starting point for making approximations. In this regard, we mention in particular the t-model proposed by Chorin et al [3]. In [4] reduced models of the viscous Burgers equation and 3-dimensional Navier-Stokes equation were developed by analytically approximating the memory kernel in the GLE using the trapezoidal integration scheme. Li and E [5] developed approximate boundary conditions for molecular dynamics using linear approximation of the M-Z formalism. In [6], auxiliary variables are used to deal with the non-Markovian dynamics of the GLE. Despite all of these efforts, it is fair to say that there is still a lack of systematic and reliable procedure for approximating the GLE. In fact, dealing with the memory terms explicitly does not seem to be a promising approach for deriving systematic and reliable approximations to the GLE. ∗The Program in Applied and Computational Mathematics, Princeton University, Princeton, NJ 08544, USA †Department of Mechanics and Aerospace Engineering, Southern University of Science and Technology, Shenzhen 518055, Peoples Republic of China ‡Beijing Institute of Big Data Research, Beijing, 100871, P.R. China 1 ar X iv :1 80 8. 04 25 8v 1 [ cs .L G ] 1 0 A ug 2 01 8 One of the most successful approaches for representing memory effects has been the recurrent neural networks (RNN) in machine learning. Indeed there is a natural analogy between RNN and M-Z. 
The hidden states in RNN can be viewed as a reduced representation of the unresolved variables in M-Z. We can then view RNN as a way of performing dimension reduction in the space of the unresolved variables. In this paper, we explore the possibility of performing model reduction using RNNs. We will limit ourselves to the situation when the original model is in the form of a conservative partial differential equation (PDE), the reduced model is an averaged version of the original PDE. The crux of the matter is then the accurate representation of the unresolved flux term. We propose two kinds of models. In the first kind, the unresolved flux terms in the equation are learned from data. This flux model is then used in the averaged equation to form the reduced model. We call this the direct training model. A second approach, which we call the coupled training model, is to train the neural network together with the averaged equation. From the viewpoint of machine learning, the objective in the direct training model is to fit the unresolved flux. The objective in the coupled training model is to fit the resolved variables (the averaged quantities). For application, we focus on the Kuramoto-Sivashinsky (K-S) equation and the Navier-Stokes (N-S) equation. The K-S equation writes as ∂u ∂t + 1 2 ∂u ∂x + ∂u ∂x2 + ∂u ∂x4 = 0, x ∈ R, t > 0; (1) u(x, t) = u(x+ L, t), u(x, 0) = g(x). (2) We are interested in a low-pass filtered solution of the K-S equation, ū, and want to develop a reduced system for ū. In general, ū can be written as the convolution of u with a low pass filter G(y):",
"title": ""
},
{
"docid": "e48f641ad2ca9a61611b48e1a6f82a52",
"text": "We present a methodology to design cavity-excited omega-bianisotropic metasurface (O-BMS) antennas capable of producing arbitrary radiation patterns, prescribed by antenna array theory. The method relies on previous work, in which we proved that utilizing the three O-BMS degrees of freedom, namely, electric and magnetic polarizabilities, and magnetoelectric coupling, any field transformation that obeys local power conservation can be implemented via passive lossless components. When the O-BMS acts as the top cover of a metallic cavity excited by a point source, this property allows optimization of the metasurface modal reflection coefficients to establish any desirable power profile on the aperture. Matching in this way the excitation profile to the target power profile corresponding to the desirable aperture fields allows emulation of arbitrary discrete antenna array radiation patterns. The resultant low-profile probed-fed cavity-excited O-BMS antennas offer a new means for meticulous pattern control, without requiring complex, expensive, and often lossy, feed networks.",
"title": ""
},
{
"docid": "7edfde7d7875d88702db2aabc4ac2883",
"text": "This paper proposes a novel approach to build integer multiplication circuits based on speculation, a technique which performs a faster-but occasionally wrong-operation resorting to a multi-cycle error correction circuit only in the rare case of error. The proposed speculative multiplier uses a novel speculative carry-save reduction tree using three steps: partial products recoding, partial products partitioning, speculative compression. The speculative tree uses speculative (m:2) counters, with m > 3, that are faster than a conventional tree using full-adders and half-adders. A technique to automatically choose the suitable speculative counters, taking into accounts both error probability and delay, is also presented in the paper. The speculative tree is completed with a fast speculative carry-propagate adder and an error correction circuit. We have synthesized speculative multipliers for several operand lengths using the UMC 65 nm library. Comparisons with conventional multipliers show that speculation is effective when high speed is required. Speculative multipliers allow reaching a higher speed compared with conventional counterparts and are also quite effective in terms of power dissipation, when a high speed operation is required.",
"title": ""
}
] |
scidocsrr
|
e3743032e23258c4b1874b76ac169833
|
Cloud computing for Internet of Things & sensing based applications
|
[
{
"docid": "00614d23a028fe88c3f33db7ace25a58",
"text": "Cloud Computing and The Internet of Things are the two hot points in the Internet field. The application of the two new technologies is in hot discussion and research, but quite less on the field of agriculture and forestry. Thus, in this paper, we analyze the study and application of Cloud Computing and The Internet of Things on agriculture and forestry. Then we put forward an idea that making a combination of the two techniques and analyze the feasibility, applications and future prospect of the combination.",
"title": ""
}
] |
[
{
"docid": "9490ca6447448c0aba919871b1fa9791",
"text": "The study's goal was to examine the socially responsible power use in the context of ethical leadership as an explanatory mechanism of the ethical leadership-follower outcomes link. Drawing on the attachment theory (Bowlby, 1969/1982), we explored a power-based process model, which assumes that a leader's personal power is an intervening variable in the relationship between ethical leadership and follower outcomes, while incorporating the moderating role of followers' moral identity in this transformation process. The results of a two-wave field study (N = 235) that surveyed employees and a scenario experiment (N = 169) fully supported the proposed (moderated) mediation models, as personal power mediated the positive relationship between ethical leadership and a broad range of tested follower outcomes (i.e., leader effectiveness, follower extra effort, organizational commitment, job satisfaction, and work engagement), as well as the interactive effects of ethical leadership and follower moral identity on these follower outcomes. Theoretical and practical implications are discussed.",
"title": ""
},
{
"docid": "e9a154af3a041cadc5986b7369ce841b",
"text": "Metrological characterization of high-performance ΔΣ Analog-to-Digital Converters (ADCs) poses severe challenges to reference instrumentation and standard methods. In this paper, most important tests related to noise and effective resolution, nonlinearity, environmental uncertainty, and stability are proved and validated in the specific case of a high-performance ΔΣ ADC. In particular, tests setups are proposed and discussed and the definitions used to assess the performance are clearly stated in order to identify procedures and guidelines for high-resolution ADCs characterization. An experimental case study of the high-performance ΔΣ ADC DS-22 developed at CERN is reported and discussed by presenting effective alternative test setups. Experimental results show that common characterization methods by the IEEE standards 1241 [1] and 1057 [2] cannot be used and alternative strategies turn out to be effective.",
"title": ""
},
{
"docid": "012bcbc6b5e7b8aaafd03f100489961c",
"text": "DNA is an attractive medium to store digital information. Here we report a storage strategy, called DNA Fountain, that is highly robust and approaches the information capacity per nucleotide. Using our approach, we stored a full computer operating system, movie, and other files with a total of 2.14 × 106 bytes in DNA oligonucleotides and perfectly retrieved the information from a sequencing coverage equivalent to a single tile of Illumina sequencing. We also tested a process that can allow 2.18 × 1015 retrievals using the original DNA sample and were able to perfectly decode the data. Finally, we explored the limit of our architecture in terms of bytes per molecule and obtained a perfect retrieval from a density of 215 petabytes per gram of DNA, orders of magnitude higher than previous reports.",
"title": ""
},
{
"docid": "65dd0e6e143624c644043507cf9465a7",
"text": "Let G \" be a non-directed graph having n vertices, without parallel edges and slings. Let the vertices of Gn be denoted by F 1 ,. . ., Pn. Let v(P j) denote the valency of the point P i and put (0. 1) V(G,) = max v(Pj). 1ninn Let E(G.) denote the number of edges of Gn. Let H d (n, k) denote the set of all graphs Gn for which V (G n) = k and the diameter D (Gn) of which is-d, In the present paper we shall investigate the quantity (0 .2) Thus we want to determine the minimal number N such that there exists a graph having n vertices, N edges and diameter-d and the maximum of the valencies of the vertices of the graph is equal to k. To help the understanding of the problem let us consider the following interpretation. Let be given in a country n airports ; suppose we want to plan a network of direct flights between these airports so that the maximal number of airports to which a given airport can be connected by a direct flight should be equal to k (i .e. the maximum of the capacities of the airports is prescribed), further it should be possible to fly from every airport to any other by changing the plane at most d-1 times ; what is the minimal number of flights by which such a plan can be realized? For instance, if n = 7, k = 3, d= 2 we have F2 (7, 3) = 9 and the extremal graph is shown by Fig. 1. The problem of determining Fd (n, k) has been proposed and discussed recently by two of the authors (see [1]). In § 1 we give a short summary of the results of the paper [1], while in § 2 and 3 we give some new results which go beyond those of [1]. Incidentally we solve a long-standing problem about the maximal number of edges of a graph not containing a cycle of length 4. In § 4 we mention some unsolved problems. Let us mention that our problem can be formulated also in terms of 0-1 matrices as follows : Let M=(a il) be a symmetrical n by n zero-one matrix such 2",
"title": ""
},
{
"docid": "c3ee2beee84cd32e543c4b634062eeac",
"text": "In this paper, a hierarchical feature extraction method is proposed for image recognition. The key idea of the proposed method is to extract an effective feature, called local neural response (LNR), of the input image with nontrivial discrimination and invariance properties by alternating between local coding and maximum pooling operation. The local coding, which is carried out on the locally linear manifold, can extract the salient feature of image patches and leads to a sparse measure matrix on which maximum pooling is carried out. The maximum pooling operation builds the translation invariance into the model. We also show that other invariant properties, such as rotation and scaling, can be induced by the proposed model. In addition, a template selection algorithm is presented to reduce computational complexity and to improve the discrimination ability of the LNR. Experimental results show that our method is robust to local distortion and clutter compared with state-of-the-art algorithms.",
"title": ""
},
{
"docid": "9dd8ab91929e3c4e7ddd90919eb79d22",
"text": "–Graphs are currently becoming more important in modeling and demonstrating information. In the recent years, graph mining is becoming an interesting field for various processes such as chemical compounds, protein structures, social networks and computer networks. One of the most important concepts in graph mining is to find frequent subgraphs. The major advantage of utilizing subgraphs is speeding up the search for similarities, finding graph specifications and graph classifications. In this article we classify the main algorithms in the graph mining field. Some fundamental algorithms are reviewed and categorized. Some issues for any algorithm are graph representation, search strategy, nature of input and completeness of output that are discussed in this article. Keywords––Frequent subgraph, Graph mining, Graph mining algorithms",
"title": ""
},
{
"docid": "dff0752eace9db08e25904a844533338",
"text": "The authors investigated whether accuracy in identifying deception from demeanor in high-stake lies is specific to those lies or generalizes to other high-stake lies. In Experiment 1, 48 observers judged whether 2 different groups of men were telling lies about a mock theft (crime scenario) or about their opinion (opinion scenario). The authors found that observers' accuracy in judging deception in the crime scenario was positively correlated with their accuracy in judging deception in the opinion scenario. Experiment 2 replicated the results of Experiment 1, as well as P. Ekman and M. O'Sullivan's (1991) finding of a positive correlation between the ability to detect deceit and the ability to identify micromomentary facial expressions of emotion. These results show that the ability to detect high-stake lies generalizes across high-stake situations and is most likely due to the presence of emotional clues that betray deception in high-stake lies.",
"title": ""
},
{
"docid": "88615ac1788bba148f547ca52bffc473",
"text": "This paper describes a probabilistic framework for faithful reproduction of dynamic facial expressions on a synthetic face model with MPEG-4 facial animation parameters (FAPs) while achieving very low bitrate in data transmission. The framework consists of a coupled Bayesian network (BN) to unify the facial expression analysis and synthesis into one coherent structure. At the analysis end, we cast the FAPs and facial action coding system (FACS) into a dynamic Bayesian network (DBN) to account for uncertainties in FAP extraction and to model the dynamic evolution of facial expressions. At the synthesizer, a static BN reconstructs the FAPs and their intensity. The two BNs are connected statically through a data stream link. Using the coupled BN to analyze and synthesize the dynamic facial expressions is the major novelty of this work. The novelty brings about several benefits. First, very low bitrate (9 bytes per frame) in data transmission can be achieved. Second, a facial expression is inferred through both spatial and temporal inference so that the perceptual quality of animation is less affected by the misdetected FAPs. Third, more realistic looking facial expressions can be reproduced by modelling the dynamics of human expressions.",
"title": ""
},
{
"docid": "b477893ecccb3aee1de3b6f12f3186ca",
"text": "Obesity is a global health problem characterized as an increase in the mass of adipose tissue. Adipogenesis is one of the key pathways that increases the mass of adipose tissue, by which preadipocytes mature into adipocytes through cell differentiation. Peroxisome proliferator-activated receptor γ (PPARγ), the chief regulator of adipogenesis, has been acutely investigated as a molecular target for natural products in the development of anti-obesity treatments. In this review, the regulation of PPARγ expression by natural products through inhibition of CCAAT/enhancer-binding protein β (C/EBPβ) and the farnesoid X receptor (FXR), increased expression of GATA-2 and GATA-3 and activation of the Wnt/β-catenin pathway were analyzed. Furthermore, the regulation of PPARγ transcriptional activity associated with natural products through the antagonism of PPARγ and activation of Sirtuin 1 (Sirt1) and AMP-activated protein kinase (AMPK) were discussed. Lastly, regulation of mitogen-activated protein kinase (MAPK) by natural products, which might regulate both PPARγ expression and PPARγ transcriptional activity, was summarized. Understanding the role natural products play, as well as the mechanisms behind their regulation of PPARγ activity is critical for future research into their therapeutic potential for fighting obesity.",
"title": ""
},
{
"docid": "e7ae72f3bb2c24259dd122bff0f5d04e",
"text": "In this paper we introduce a novel linear precoding technique. The approach used for the design of the precoding matrix is general and the resulting algorithm can address several optimization criteria with an arbitrary number of antennas at the user terminals. We have achieved this by designing the precoding matrices in two steps. In the first step we minimize the overlap of the row spaces spanned by the effective channel matrices of different users using a new cost function. In the next step, we optimize the system performance with respect to specific optimization criteria assuming a set of parallel single- user MIMO channels. By combining the closed form solution with Tomlinson-Harashima precoding we reach the maximum sum-rate capacity when the total number of antennas at the user terminals is less or equal to the number of antennas at the base station. By iterating the closed form solution with appropriate power loading we are able to extract the full diversity in the system and reach the maximum sum-rate capacity in case of high multi-user interference. Joint processing over a group of multi-user MIMO channels in different frequency and time slots yields maximum diversity regardless of the level of multi-user interference.",
"title": ""
},
{
"docid": "d9617ed486a1b5488beab08652f736e0",
"text": "The paper shows how Combinatory Categorial Grammar (CCG) can be adapted to take advantage of the extra resourcesensitivity provided by the Categorial Type Logic framework. The resulting reformulation, Multi-Modal CCG, supports lexically specified control over the applicability of combinatory rules, permitting a universal rule component and shedding the need for language-specific restrictions on rules. We discuss some of the linguistic motivation for these changes, define the Multi-Modal CCG system and demonstrate how it works on some basic examples. We furthermore outline some possible extensions and address computational aspects of Multi-Modal CCG.",
"title": ""
},
{
"docid": "869cc834f84bc88a258b2d9d9d4f3096",
"text": "Obesity is a multifactorial disease characterized by an excessive weight for height due to an enlarged fat deposition such as adipose tissue, which is attributed to a higher calorie intake than the energy expenditure. The key strategy to combat obesity is to prevent chronic positive impairments in the energy equation. However, it is often difficult to maintain energy balance, because many available foods are high-energy yielding, which is usually accompanied by low levels of physical activity. The pharmaceutical industry has invested many efforts in producing antiobesity drugs; but only a lipid digestion inhibitor obtained from an actinobacterium is currently approved and authorized in Europe for obesity treatment. This compound inhibits the activity of pancreatic lipase, which is one of the enzymes involved in fat digestion. In a similar way, hundreds of extracts are currently being isolated from plants, fungi, algae, or bacteria and screened for their potential inhibition of pancreatic lipase activity. Among them, extracts isolated from common foodstuffs such as tea, soybean, ginseng, yerba mate, peanut, apple, or grapevine have been reported. Some of them are polyphenols and saponins with an inhibitory effect on pancreatic lipase activity, which could be applied in the management of the obesity epidemic.",
"title": ""
},
{
"docid": "bb774fed5d447fdc181cb712c74925c2",
"text": "Test-driven development is a discipline that helps professional software developers ship clean, flexible code that works, on time. In this article, the author discusses how test-driven development can help software developers achieve a higher degree of professionalism",
"title": ""
},
{
"docid": "c94d01ee0aaa8a70ce4e3441850316a6",
"text": "Convolutional neural networks (CNNs) are inherently subject to invariable filters that can only aggregate local inputs with the same topological structures. It causes that CNNs are allowed to manage data with Euclidean or grid-like structures (e.g., images), not ones with non-Euclidean or graph structures (e.g., traffic networks). To broaden the reach of CNNs, we develop structure-aware convolution to eliminate the invariance, yielding a unified mechanism of dealing with both Euclidean and non-Euclidean structured data. Technically, filters in the structure-aware convolution are generalized to univariate functions, which are capable of aggregating local inputs with diverse topological structures. Since infinite parameters are required to determine a univariate function, we parameterize these filters with numbered learnable parameters in the context of the function approximation theory. By replacing the classical convolution in CNNs with the structure-aware convolution, Structure-Aware Convolutional Neural Networks (SACNNs) are readily established. Extensive experiments on eleven datasets strongly evidence that SACNNs outperform current models on various machine learning tasks, including image classification and clustering, text categorization, skeleton-based action recognition, molecular activity detection, and taxi flow prediction.",
"title": ""
},
{
"docid": "5d5014506bdf0c16b566edc8bba3b730",
"text": "This paper surveys recent literature in the domain of machine learning techniques and artificial intelligence used to predict stock market movements. Artificial Neural Networks (ANNs) are identified to be the dominant machine learning technique in stock market prediction area. Keywords— Artificial Neural Networks (ANNs); Stock Market; Prediction",
"title": ""
},
{
"docid": "8b2c83868c16536910e7665998b2d87e",
"text": "Nowadays organizations turn to any standard procedure to gain a competitive advantage. If sustainable, competitive advantage can bring about benefit to the organization. The aim of the present study was to introduce competitive advantage as well as to assess the impacts of the balanced scorecard as a means to measure the performance of organizations. The population under study included employees of organizations affiliated to the Social Security Department in North Khorasan Province, of whom a total number of 120 employees were selected as the participants in the research sample. Two researcher-made questionnaires with a 5-point Likert scale were used to measure the competitive advantage and the balanced scorecard. Besides, Cronbach's alpha coefficient was used to measure the reliability of the instruments that was equal to 0.74 and 0.79 for competitive advantage and the balanced scorecard, respectively. The data analysis was performed using the structural equation modeling and the results indicated the significant and positive impact of the implementation of the balanced scorecard on the sustainable competitive advantage. © 2015 AESS Publications. All Rights Reserved.",
"title": ""
},
{
"docid": "50edb29954ee6cbb3e38055d7b01e99a",
"text": "Security has becoming an important issue everywhere. Home security is becoming necessary nowadays as the possibilities of intrusion are increasing day by day. Safety from theft, leaking of raw gas and fire are the most important requirements of home security system for people. A traditional home security system gives the signals in terms of alarm. However, the GSM (Global System for Mobile communications) based security systems provides enhanced security as whenever a signal from sensor occurs, a text message is sent to a desired number to take necessary actions. This paper suggests two methods for home security system. The first system uses web camera. Whenever there is a motion in front of the camera, it gives security alert in terms of sound and a mail is delivered to the owner. The second method sends SMS which uses GSMGPS Module (sim548c) and Atmega644p microcontroller, sensors, relays and buzzers.",
"title": ""
},
{
"docid": "0b79fc06afe7782e7bdcdbd96cc1c1a0",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/annals.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.",
"title": ""
},
{
"docid": "92ff221950df6e7fd266926c305200cd",
"text": "The authors provide a didactic treatment of nonlinear (categorical) principal components analysis (PCA). This method is the nonlinear equivalent of standard PCA and reduces the observed variables to a number of uncorrelated principal components. The most important advantages of nonlinear over linear PCA are that it incorporates nominal and ordinal variables and that it can handle and discover nonlinear relationships between variables. Also, nonlinear PCA can deal with variables at their appropriate measurement level; for example, it can treat Likert-type scales ordinally instead of numerically. Every observed value of a variable can be referred to as a category. While performing PCA, nonlinear PCA converts every category to a numeric value, in accordance with the variable's analysis level, using optimal quantification. The authors discuss how optimal quantification is carried out, what analysis levels are, which decisions have to be made when applying nonlinear PCA, and how the results can be interpreted. The strengths and limitations of the method are discussed. An example applying nonlinear PCA to empirical data using the program CATPCA (J. J. Meulman, W. J. Heiser, & SPSS, 2004) is provided.",
"title": ""
},
{
"docid": "981cbb9140570a6a6f3d4f4f49cd3654",
"text": "OBJECTIVES\nThe study sought to evaluate clinical outcomes in clinical practice with rhythm control versus rate control strategy for management of atrial fibrillation (AF).\n\n\nBACKGROUND\nRandomized trials have not demonstrated significant differences in stroke, heart failure, or mortality between rhythm and rate control strategies. The comparative outcomes in contemporary clinical practice are not well described.\n\n\nMETHODS\nPatients managed with a rhythm control strategy targeting maintenance of sinus rhythm were retrospectively compared with a strategy of rate control alone in a AF registry across various U.S. practice settings. Unadjusted and adjusted (inverse-propensity weighted) outcomes were estimated.\n\n\nRESULTS\nThe overall study population (N = 6,988) had a median of 74 (65 to 81) years of age, 56% were males, 77% had first detected or paroxysmal AF, and 68% had CHADS2 score ≥2. In unadjusted analyses, rhythm control was associated with lower all-cause death, cardiovascular death, first stroke/non-central nervous system systemic embolization/transient ischemic attack, or first major bleeding event (all p < 0.05); no difference in new onset heart failure (p = 0.28); and more frequent cardiovascular hospitalizations (p = 0.0006). There was no difference in the incidence of pacemaker, defibrillator, or cardiac resynchronization device implantations (p = 0.99). In adjusted analyses, there were no statistical differences in clinical outcomes between rhythm control and rate control treated patients (all p > 0.05); however, rhythm control was associated with more cardiovascular hospitalizations (hazard ratio: 1.24; 95% confidence interval: 1.10 to 1.39; p = 0.0003).\n\n\nCONCLUSIONS\nAmong patients with AF, rhythm control was not superior to rate control strategy for outcomes of stroke, heart failure, or mortality, but was associated with more cardiovascular hospitalizations.",
"title": ""
}
] |
scidocsrr
|
17a96cf135736fb1b9037d9fcf3a5faa
|
Addressing challenges in promoting healthy lifestyles: the AI-chatbot approach
|
[
{
"docid": "85feee6e5492dfa5cd95eed2684f2558",
"text": "This paper presents a large-scale analysis of contextualized smartphone usage in real life. We introduce two contextual variables that condition the use of smartphone applications, namely places and social context. Our study shows strong dependencies between phone usage and the two contextual cues, which are automatically extracted based on multiple built-in sensors available on the phone. By analyzing continuous data collected on a set of 77 participants from a European country over 9 months of actual usage, our framework automatically reveals key patterns of phone application usage that would traditionally be obtained through manual logging or questionnaire. Our findings contribute to the large-scale understanding of applications and context, bringing out design implications for interfaces on smartphones.",
"title": ""
},
{
"docid": "a2adeb9448c699bbcbb10d02a87e87a5",
"text": "OBJECTIVE\nTo quantify the presence of health behavior theory constructs in iPhone apps targeting physical activity.\n\n\nMETHODS\nThis study used a content analysis of 127 apps from Apple's (App Store) Health & Fitness category. Coders downloaded the apps and then used an established theory-based instrument to rate each app's inclusion of theoretical constructs from prominent behavior change theories. Five common items were used to measure 20 theoretical constructs, for a total of 100 items. A theory score was calculated for each app. Multiple regression analysis was used to identify factors associated with higher theory scores.\n\n\nRESULTS\nApps were generally observed to be lacking in theoretical content. Theory scores ranged from 1 to 28 on a 100-point scale. The health belief model was the most prevalent theory, accounting for 32% of all constructs. Regression analyses indicated that higher priced apps and apps that addressed a broader activity spectrum were associated with higher total theory scores.\n\n\nCONCLUSION\nIt is not unexpected that apps contained only minimal theoretical content, given that app developers come from a variety of backgrounds and many are not trained in the application of health behavior theory. The relationship between price and theory score corroborates research indicating that higher quality apps are more expensive. There is an opportunity for health and behavior change experts to partner with app developers to incorporate behavior change theories into the development of apps. These future collaborations between health behavior change experts and app developers could foster apps superior in both theory and programming possibly resulting in better health outcomes.",
"title": ""
}
] |
[
{
"docid": "28cdd3fafd052941c496d246e0df244b",
"text": "Writing Windows NT device drivers can be a daunting task. Device drivers must be fully re-entrant, must use only limited resources and must be created with special development environments. Executing device drivers in user-mode offers significant coding advantages. User-mode device drivers have access to all user-mode libraries and applications. They can be developed using standard development tools and debugged on a single machine. Using the Proxy Driver to retrieve I/O requests from the kernel, user-mode drivers can export full device services to the kernel and applications. User-mode device drivers offer enormous flexibility for emulating devices and experimenting with new file systems. Experimental results show that in many cases, the overhead of moving to user-mode for processing I/O can be masked by the inherent costs of accessing physical devices.",
"title": ""
},
{
"docid": "310aa30e2dd2b71c09780f7984a3663c",
"text": "E-governance is more than just a government website on the Internet. The strategic objective of e-governance is to support and simplify governance for all parties; government, citizens and businesses. The use of ICTs can connect all three parties and support processes and activities. In other words, in e-governance electronic means support and stimulate good governance. Therefore, the objectives of e-governance are similar to the objectives of good governance. Good governance can be seen as an exercise of economic, political, and administrative authority to better manage affairs of a country at all levels. It is not difficult for people in developed countries to imagine a situation in which all interaction with government can be done through one counter 24 hours a day, 7 days a week, without waiting in lines. However to achieve this same level of efficiency and flexibility for developing countries is going to be difficult. The experience in developed countries shows that this is possible if governments are willing to decentralize responsibilities and processes, and if they start to use electronic means. This paper is going to examine the legal and infrastructure issues related to e-governance from the perspective of developing countries. Particularly it will examine how far the developing countries have been successful in providing a legal framework.",
"title": ""
},
{
"docid": "ac168ff92c464cb90a9a4ca0eb5bfa5c",
"text": "Path computing is a new paradigm that generalizes the edge computing vision into a multi-tier cloud architecture deployed over the geographic span of the network. Path computing supports scalable and localized processing by providing storage and computation along a succession of datacenters of increasing sizes, positioned between the client device and the traditional wide-area cloud data-center. CloudPath is a platform that implements the path computing paradigm. CloudPath consists of an execution environment that enables the dynamic installation of light-weight stateless event handlers, and a distributed eventual consistent storage system that replicates application data on-demand. CloudPath handlers are small, allowing them to be rapidly instantiated on demand on any server that runs the CloudPath execution framework. In turn, CloudPath automatically migrates application data across the multiple datacenter tiers to optimize access latency and reduce bandwidth consumption.",
"title": ""
},
{
"docid": "4de437aa5fe1b27ebba232f0efe82b02",
"text": "Most people do not interact with Semantic Web data directly. Unless they have the expertise to understand the underlying technology, they need textual or visual interfaces to help them make sense of it. We explore the problem of generating natural language summaries for Semantic Web data. This is non-trivial, especially in an open-domain context. To address this problem, we explore the use of neural networks. Our system encodes the information from a set of triples into a vector of fixed dimensionality and generates a textual summary by conditioning the output on the encoded vector. We train and evaluate our models on two corpora of loosely aligned Wikipedia snippets and DBpedia and Wikidata triples with promising results.",
"title": ""
},
{
"docid": "ada881da62d4ceff774cce82dde3c738",
"text": "Characterizing information diffusion on social platforms like Twitter enables us to understand the properties of underlying media and model communication patterns. As Twitter gains in popularity, it has also become a venue to broadcast rumors and misinformation. We use epidemiological models to characterize information cascades in twitter resulting from both news and rumors. Specifically, we use the SEIZ enhanced epidemic model that explicitly recognizes skeptics to characterize eight events across the world and spanning a range of event types. We demonstrate that our approach is accurate at capturing diffusion in these events. Our approach can be fruitfully combined with other strategies that use content modeling and graph theoretic features to detect (and possibly disrupt) rumors.",
"title": ""
},
{
"docid": "8e6ceaadcad931afcf9b9f2f17deb4fb",
"text": "We formulate language modeling as a matrix factorization problem, and show that the expressiveness of Softmax-based models (including the majority of neural language models) is limited by a Softmax bottleneck. Given that natural language is highly context-dependent, this further implies that in practice Softmax with distributed word embeddings does not have enough capacity to model natural language. We propose a simple and effective method to address this issue, and improve the state-of-the-art perplexities on Penn Treebank and WikiText-2 to 47.69 and 40.68 respectively.1",
"title": ""
},
{
"docid": "7b314cd0c326cb977b92f4907a0ed737",
"text": "This is the third part of a series of papers that provide a comprehensive survey of the techniques for tracking maneuvering targets without addressing the so-called measurement-origin uncertainty. Part I [1] and Part II [2] deal with general target motion models and ballistic target motion models, respectively. This part surveys measurement models, including measurement model-based techniques, used in target tracking. Models in Cartesian, sensor measurement, their mixed, and other coordinates are covered. The stress is on more recent advances — topics that have received more attention recently are discussed in greater details.",
"title": ""
},
{
"docid": "88398c81a8706b97f427c12d63ec62cc",
"text": "In processing human produced text using natural language processing (NLP) techniques, two fundamental subtasks that arise are (i) segmentation of the plain text into meaningful subunits (e.g., entities), and (ii) dependency parsing, to establish relations between subunits. Such structural interpretation of text provides essential building blocks for upstream expert system tasks: e.g., from interpreting textual real estate ads, one may want to provide an accurate price estimate and/or provide selection filters for end users looking for a particular property — which all could rely on knowing the types and number of rooms, etc. In this paper we develop a relatively simple and effective neural joint model that performs both segmentation and dependency parsing together, instead of one after the other as in most state-of-the-art works. We will focus in particular on the real estate ad setting, aiming to convert an ad to a structured description, which we name property tree, comprising the tasks of (1) identifying important entities of a property (e.g., rooms) from classifieds and (2) structuring them into a tree format. In this work, we propose a new joint model that is able to tackle the two tasks simultaneously and construct the property tree by (i) avoiding the error propagation that would arise from the subtasks one after the other in a pipelined fashion, and (ii) exploiting the interactions between the subtasks. For this purpose, we perform an extensive comparative study of the pipeline methods and the new proposed ∗Corresponding author Email addresses: [email protected] (Giannis Bekoulis), [email protected] (Johannes Deleu), [email protected] (Thomas Demeester), [email protected] (Chris Develder) Preprint submitted to Expert Systems with Applications February 23, 2018 joint model, reporting an improvement of over three percentage points in the overall edge F1 score of the property tree. Also, we propose attention methods, to encourage our model to focus on salient tokens during the construction of the property tree. Thus we experimentally demonstrate the usefulness of attentive neural architectures for the proposed joint model, showcasing a further improvement of two percentage points in edge F1 score for our application. While the results demonstrated are for the particular real estate setting, the model is generic in nature, and thus could be equally applied to other expert system scenarios requiring the general tasks of both (i) detecting entities (segmentation) and (ii) establishing relations among them (dependency parsing).",
"title": ""
},
{
"docid": "293f102f8e6cedb4b93856224f081272",
"text": "In this paper, we propose a decision-based, signal-adaptive median filtering algorithm for removal of impulse noise. Our algorithm achieves accurate noise detection and high SNR measures without smearing the fine details and edges in the image. The notion of homogeneity level is defined for pixel values based on their global and local statistical properties. The cooccurrence matrix technique is used to represent the correlations between a pixel and its neighbors, and to derive the upper and lower bound of the homogeneity level. Noise detection is performed at two stages: noise candidates are first selected using the homogeneity level, and then a refining process follows to eliminate false detections. The noise detection scheme does not use a quantitative decision measure, but uses qualitative structural information, and it is not subject to burdensome computations for optimization of the threshold values. Empirical results indicate that our scheme performs significantly better than other median filters, in terms of noise suppression and detail preservation.",
"title": ""
},
{
"docid": "5935224c53222d0234adffddae23eb04",
"text": "The multipath-rich wireless environment associated with typical wireless usage scenarios is characterized by a fading channel response that is time-varying, location-sensitive, and uniquely shared by a given transmitter-receiver pair. The complexity associated with a richly scattering environment implies that the short-term fading process is inherently hard to predict and best modeled stochastically, with rapid decorrelation properties in space, time, and frequency. In this paper, we demonstrate how the channel state between a wireless transmitter and receiver can be used as the basis for building practical secret key generation protocols between two entities. We begin by presenting a scheme based on level crossings of the fading process, which is well-suited for the Rayleigh and Rician fading models associated with a richly scattering environment. Our level crossing algorithm is simple, and incorporates a self-authenticating mechanism to prevent adversarial manipulation of message exchanges during the protocol. Since the level crossing algorithm is best suited for fading processes that exhibit symmetry in their underlying distribution, we present a second and more powerful approach that is suited for more general channel state distributions. This second approach is motivated by observations from quantizing jointly Gaussian processes, but exploits empirical measurements to set quantization boundaries and a heuristic log likelihood ratio estimate to achieve an improved secret key generation rate. We validate both proposed protocols through experimentations using a customized 802.11a platform, and show for the typical WiFi channel that reliable secret key establishment can be accomplished at rates on the order of 10 b/s.",
"title": ""
},
{
"docid": "4d959fc84483618a1ea6648b16d2e4d2",
"text": "In this themed issue of the Journal of Sport & Exercise Psychology, we bring together an eclectic mix of papers focusing on how expert performers learn the skills needed to compete at the highest level in sport. In the preface, we highlight the value of adopting the expert performance approach as a systematic framework for the evaluation and development of expertise and expert performance in sport. We then place each of the empirical papers published in this issue into context and briefly outline their unique contributions to knowledge in this area. Finally, we highlight several potential avenues for future research in the hope of encouraging others to scientifically study how experts acquire the mechanisms mediating superior performance in sport and how coaches can draw on this knowledge to guide their athletes toward the most effective training activities.",
"title": ""
},
{
"docid": "40cd4d0863ed757709530af59e928e3b",
"text": "Kynurenic acid (KYNA) is an endogenous antagonist of ionotropic glutamate receptors and the α7 nicotinic acetylcholine receptor, showing anticonvulsant and neuroprotective activity. In this study, the presence of KYNA in food and honeybee products was investigated. KYNA was found in all 37 tested samples of food and honeybee products. The highest concentration of KYNA was obtained from honeybee products’ samples, propolis (9.6 nmol/g), honey (1.0–4.8 nmol/g) and bee pollen (3.4 nmol/g). A high concentration was detected in fresh broccoli (2.2 nmol/g) and potato (0.7 nmol/g). Only traces of KYNA were found in some commercial baby products. KYNA administered intragastrically in rats was absorbed from the intestine into the blood stream and transported to the liver and to the kidney. In conclusion, we provide evidence that KYNA is a constituent of food and that it can be easily absorbed from the digestive system.",
"title": ""
},
{
"docid": "06413e71fbbe809ee2ffbdb31dc8fe59",
"text": "This paper takes a critical look at the features used in the semantic role tagging literature and show that the information in the input, generally a syntactic parse tree, has yet to be fully exploited. We propose an additional set of features and our experiments show that these features lead to fairly significant improvements in the tasks we performed. We further show that different features are needed for different subtasks. Finally, we show that by using a Maximum Entropy classifier and fewer features, we achieved results comparable with the best previously reported results obtained with SVM models. We believe this is a clear indication that developing features that capture the right kind of information is crucial to advancing the stateof-the-art in semantic analysis.",
"title": ""
},
{
"docid": "c07f7baed3648b190eca0f4753027b57",
"text": "Objective: An autoencoder-based framework that simultaneously reconstruct and classify biomedical signals is proposed. Previous work has treated reconstruction and classification as separate problems. This is the first study that proposes a combined framework to address the issue in a holistic fashion. Methods: For telemonitoring purposes, reconstruction techniques of biomedical signals are largely based on compressed sensing (CS); these are “designed” techniques where the reconstruction formulation is based on some “assumption” regarding the signal. In this study, we propose a new paradigm for reconstruction—the reconstruction is “learned,” using an autoencoder; it does not require any assumption regarding the signal as long as there is sufficiently large training data. But since the final goal is to analyze/classify the signal, the system can also learn a linear classification map that is added inside the autoencoder. The ensuing optimization problem is solved using the Split Bregman technique. Results: Experiments were carried out on reconstructing and classifying electrocardiogram (ECG) (arrhythmia classification) and EEG (seizure classification) signals. Conclusion: Our proposed tool is capable of operating in a semi-supervised fashion. We show that our proposed method is better in reconstruction and more than an order magnitude faster than CS based methods; it is capable of real-time operation. Our method also yields better results than recently proposed classification methods. Significance: This is the first study offering an alternative to CS-based reconstruction. It also shows that the representation learning approach can yield better results than traditional methods that use hand-crafted features for signal analysis.",
"title": ""
},
{
"docid": "cf580b65d1caed22e21f1bdd69cdd9f0",
"text": "This paper focuses on the problem of script identification in unconstrained scenarios. Script identification is an important prerequisite to recognition, and an indispensable condition for automatic text understanding systems designed for multi-language environments. Although widely studied for document images and handwritten documents, it remains an almost unexplored territory for scene text images. We detail a novel method for script identification in natural images that combines convolutional features and the Naive-Bayes Nearest Neighbor classifier. The proposed framework efficiently exploits the discriminative power of small stroke-parts, in a fine-grained classification framework. In addition, we propose a new public benchmark dataset for the evaluation of joint text detection and script identification in natural scenes. Experiments done in this new dataset demonstrate that the proposed method yields state of the art results, while it generalizes well to different datasets and variable number of scripts. The evidence provided shows that multi-lingual scene text recognition in the wild is a viable proposition. Source code of the proposed method is made available online.",
"title": ""
},
{
"docid": "8a08bb5a952589615c9054d4fc0e8c1f",
"text": "The classical plain-text representation of source code is c onvenient for programmers but requires parsing to uncover t he deep structure of the program. While sophisticated software too ls parse source code to gain access to the program’s structur e, many lightweight programming aids such as grep rely instead on only the lexical structure of source code. I d escribe a new XML application that provides an alternative representation o f Java source code. This XML-based representation, called J avaML, is more natural for tools and permits easy specification of nume rous software-engineering analyses by leveraging the abun dance of XML tools and techniques. A robust converter built with th e Jikes Java compiler framework translates from the classic l Java source code representation to JavaML, and an XSLT style sheet converts from JavaML back into the classical textual f orm.",
"title": ""
},
{
"docid": "42ea7c0ba51c3d0da09e15b61592eb86",
"text": "While labeled data is expensive to prepare, ever increasing amounts of unlabeled data is becoming widely available. In order to adapt to this phenomenon, several semi-supervised learning (SSL) algorithms, which learn from labeled as well as unlabeled data, have been developed. In a separate line of work, researchers have started to realize that graphs provide a natural way to represent data in a variety of domains. Graph-based SSL algorithms, which bring together these two lines of work, have been shown to outperform the state-of-the-art in many applications in speech processing, computer vision, natural language processing, and other areas of Artificial Intelligence. Recognizing this promising and emerging area of research, this synthesis lecture focuses on graphbased SSL algorithms (e.g., label propagation methods). Our hope is that after reading this book, the reader will walk away with the following: (1) an in-depth knowledge of the current stateof-the-art in graph-based SSL algorithms, and the ability to implement them; (2) the ability to decide on the suitability of graph-based SSL methods for a problem; and (3) familiarity with different applications where graph-based SSL methods have been successfully applied.",
"title": ""
},
{
"docid": "f4d1a3530cb84b2efa9d5a2a63e66d2f",
"text": "Gallium-Nitride technology is known for its high power density and power amplifier designs, but is also very well suited to realize robust receiver components. This paper presents the design and measurement of a robust AlGaN/GaN Low Noise Amplifier and Transmit/Receive Switch MMIC. Two versions of both MMICs have been designed in the Alcatel-Thales III-V lab AlGaN/GaN microstrip technology. One chipset version operates at X-band and the second also shows wideband performance. Input power handling of >46 dBm for the switch and >41 dBm for the LNA have been measured.",
"title": ""
},
{
"docid": "58156df07590448d89c2b8d4a46696ad",
"text": "Gene PmAF7DS confers resistance to wheat powdery mildew (isolate Bgt#211 ); it was mapped to a 14.6-cM interval ( Xgwm350 a– Xbarc184 ) on chromosome 7DS. The flanking markers could be applied in MAS breeding. Wheat powdery mildew (Pm) is caused by the biotrophic pathogen Blumeria graminis tritici (DC.) (Bgt). An ongoing threat of breakdown of race-specific resistance to Pm requires a continuous effort to discover new alleles in the wheat gene pool. Developing new cultivars with improved disease resistance is an economically and environmentally safe approach to reduce yield losses. To identify and characterize genes for resistance against Pm in bread wheat we used the (Arina × Forno) RILs population. Initially, the two parental lines were screened with a collection of 61 isolates of Bgt from Israel. Three Pm isolates Bgt#210 , Bgt#211 and Bgt#213 showed differential reactions in the parents: Arina was resistant (IT = 0), whereas Forno was moderately susceptible (IT = −3). Isolate Bgt#211 was then used to inoculate the RIL population. The segregation pattern of plant reactions among the RILs indicates that a single dominant gene controls the conferred resistance. A genetic map of the region containing this gene was assembled with DNA markers and assigned to the 7D physical bin map. The gene, temporarily designated PmAF7DS, was located in the distal region of chromosome arm 7DS. The RILs were also inoculated with Bgt#210 and Bgt#213. The plant reactions to these isolates showed high identity with the reaction to Bgt#211, indicating the involvement of the same gene or closely linked, but distinct single genes. The genomic location of PmAF7DS, in light of other Pm genes on 7DS is discussed.",
"title": ""
}
] |
scidocsrr
|
5a7b3c01676ddff1778e50464215acdc
|
Impaired Judgments of Sadness But Not Happiness Following Bilateral Amygdala Damage
|
[
{
"docid": "4f3066f6d45bc48cfe655642f384e09a",
"text": "There are two competing theories of facial expression recognition. Some researchers have suggested that it is an example of categorical perception. In this view, expression categories are considered to be discrete entities with sharp boundaries, and discrimination of nearby pairs of expressive faces is enhanced near those boundaries. Other researchers, however, suggest that facial expression perception is more graded and that facial expressions are best thought of as points in a continuous, low-dimensional space, where, for instance, surprise expressions lie between happiness and fear expressions due to their perceptual similarity. In this article, we show that a simple yet biologically plausible neural network model, trained to classify facial expressions into six basic emotions, predicts data used to support both of these theories. Without any parameter tuning, the model matches a variety of psychological data on categorization, similarity, reaction times, discrimination, and recognition difficulty, both qualitatively and quantitatively. We thus explain many of the seemingly complex psychological phenomena related to facial expression perception as natural consequences of the tasks' implementations in the brain.",
"title": ""
}
] |
[
{
"docid": "65508d1dcd73f38469400b16c0fe8b34",
"text": "Understanding the performance of distributed systems requires correlation of thousands of interactions between numerous components — a task best left to a computer. Today’s systems provide voluminous traces from each component but do not synthesise the data into concise models of system performance. We argue that online performance modelling should be a ubiquitous operating system service and outline several uses including performance debugging, capacity planning, system tuning and anomaly detection. We describe the Magpie modelling service which collates detailed traces from multiple machines in an e-commerce site, extracts request-specific audit trails, and constructs probabilistic models of request behaviour. A feasibility study evaluates the approach using an offline demonstrator. Results show that the approach is promising, but that there are many challenges to building a truly ubiquitious, online modelling infrastructure.",
"title": ""
},
{
"docid": "d325a736ecaf41e2ee1e59b505f66c2b",
"text": "Relational leadership is a relatively new term in the leadership literature, and because of this, its meaning is open to interpretation. In the present article I describe two perspectives of relational leadership: an entity perspective that focuses on identifying attributes of individuals as they engage in interpersonal relationships, and a relational perspective that views leadership as a process of social construction through which certain understandings of leadership come about and are given privileged ontology. These approaches can be complementary, but their implications for study and practice are quite different. After reviewing leadership research relative to these two perspectives I offer Relational Leadership Theory (RLT) as an overarching framework for the study of leadership as a social influence process through which emergent coordination (e.g., evolving social order) and change (e.g., new approaches, values, attitudes, behaviors, ideologies) are constructed and produced. This framework addresses relationships both as an outcome of investigation (e.g., How are leadership relationships produced?) and a context for action (e.g., How do relational dynamics contribute to structuring?). RLT draws from both entity and relational ontologies and methodologies to more fully explore the relational dynamics of leadership and organizing.",
"title": ""
},
{
"docid": "bfa659ff24af7c319702a6a8c0c7dca3",
"text": "In this letter, a grounded coplanar waveguide-to-microstrip (GCPW-to-MS) transition without via holes is presented. The transition is designed on a PET® substrate and fabricated using inkjet printing technology. To our knowledge, fabrication of transitions using inkjet printing technology has not been reported in the literature. The simulations have been performed using HFSS® software and the measurements have been carried out using a Vector Network Analyzer on a broad frequency band from 40 to 85 GHz. The effect of varying several geometrical parameters of the GCPW-to-MS on the electromagnetic response is also presented. The results obtained demonstrate good characteristics of the insertion loss better than 1.5 dB, and return loss larger than 10 dB in the V-band (50-75 GHz). Such transitions are suitable for characterization of microwave components built on different flexible substrates.",
"title": ""
},
{
"docid": "948b157586c75674e75bd50b96162861",
"text": "We propose a database design methodology for NoSQL systems. The approach is based on NoAM (NoSQL Abstract Model), a novel abstrac t d ta model for NoSQL databases, which exploits the commonalities of various N SQL systems and is used to specify a system-independent representatio n of the application data. This intermediate representation can be then implemented in target NoSQL databases, taking into account their specific features. Ov rall, the methodology aims at supporting scalability, performance, and consisten cy, as needed by next-generation web applications.",
"title": ""
},
{
"docid": "31e544afdb09bd6d6751c2c522436691",
"text": "Current database trigger systems have extremely limited scalability. This paper proposes a way to develop a truly scalable trigger system. Scalability to large numbers of triggers is achieved with a trigger cache to use main memory effectively, and a memory-conserving selection predicate index based on the use of unique expression formats called expression signatures. A key observation is that if a very large number of triggers are created, many will have the same structure, except for the appearance of different constant values. When a trigger is created, tuples are added to special relations created for expression signatures to hold the trigger’s constants. These tables can be augmented with a database index or main-memory index structure to serve as a predicate index. The design presented also uses a number of types of concurrency to achieve scalability, including token (tuple)-level, condition-level, rule action-level, and data-",
"title": ""
},
{
"docid": "78abbde692e13c6075269ac82b3f1123",
"text": "Smart Metering is one of the key issues in modern energy efficiency technologies. Several efforts have been recently made in developing suitable communication protocols for metering data management and transmission, and the Metering-Bus (M-Bus) is a relevant standard example, with a wide diffusion in the European market. This paper deals with its wireless evolution, namely Wireless M-Bus (WM-Bus), and in particular looks at it from the energy consumption perspective. Indeed, specially in those applicative scenarios where the grid powering is not available, like in water and gas metering settings, it is fundamental to guarantee the sustainability of the meter itself, by means of long-life batteries or suitable energy harvesting technologies. The present work analyzes all these aspects directly referring to a specific HW/SW implementation of the WM-Bus variants, providing some useful guidelines for its application in the smart water grid context.",
"title": ""
},
{
"docid": "fbccc0838d2aa84a882361d076c0f108",
"text": "Determinantal point processes (DPPs) are popular probabilistic models that arise in many machine learning tasks, where distributions of diverse sets are characterized by matrix determinants. In this paper, we develop fast algorithms to find the most likely configuration (MAP) of large-scale DPPs, which is NP-hard in general. Due to the submodular nature of the MAP objective, greedy algorithms have been used with empirical success. Greedy implementations require computation of log-determinants, matrix inverses or solving linear systems at each iteration. We present faster implementations of the greedy algorithms by utilizing the complementary benefits of two log-determinant approximation schemes: (a) first-order expansions to the matrix log-determinant function and (b) high-order expansions to the scalar log function with stochastic trace estimators. In our experiments, our algorithms are orders of magnitude faster than their competitors, while sacrificing marginal accuracy.",
"title": ""
},
{
"docid": "16946ba4be3cf8683bee676b5ac5e0de",
"text": "1. The types of perfect Interpretation-wise, several types of perfect expressions have been recognized in the literature (e. To illustrate, a present perfect can have one of at least three interpretations: (1) a. Since 2000, Alexandra has lived in LA. UNIVERSAL b. Alexandra has been in LA (before). EXPERIENTIAL c. Alexandra has (just) arrived in LA. RESULTATIVE The three types of perfect make different claims about the temporal location of the underlying eventuality, i.e., of live in LA in (1a), be in LA in (1b), arrive in LA in (1c), with respect to a reference time. The UNIVERSAL perfect, as in (1a), asserts that the underlying eventuality holds throughout an interval, delimited by the time of utterance and a certain time in the past (in this case, the year 2000). The EXPERIENTIAL perfect, as in (1b), asserts that the underlying eventuality holds at a proper subset of an interval, extending back from the utterance time. The RESULTATIVE perfect makes the same assertion as the Experiential perfect, with the added meaning that the result of the underlying eventuality (be in LA is the result of arrive in LA) holds at the utterance time. The distinction between the Experiential and the Resultative perfects is rather subtle. The two are commonly grouped together as the EXISTENTIAL perfect (McCawley 1971, Mittwoch 1988) and this terminology is adopted here as well. 1 Two related questions arise: (i) Is the distinction between the three types of perfect grammatically based? (ii) If indeed so, then is it still possible to posit a common representation for the perfect – a uniform structure with a single meaning – which, in combination with certain other syntactic components , each with a specialized meaning, results in the three different readings? This paper suggests that the answer to both questions is yes. To start addressing these questions, let us look at some of the known factors behind the various interpretations of the perfect. It has to be noted that the different perfect readings are not a peculiarity of the present perfect despite the fact that they are primarily discussed in relation to that form. The same interpretations are available to the past, future and nonfinite per",
"title": ""
},
{
"docid": "913b84b1afc5f34eb107d9717529bf53",
"text": "With the rapid development of the peer-to-peer lending industry in China, it has been a crucial task to evaluate the default risk of each loan. Motivated by the research in natural language processing, we make use of the online operation behavior data of borrowers and propose a consumer credit scoring method based on attention mechanism LSTM, which is a novel application of deep learning algorithm. Inspired by the idea of Word2vec, we treat each type of event as a word, construct the Event2vec model to convert each type of event transformation into a vector and, then, use an attention mechanism LSTM network to predict the probability of user default. The method is evaluated on the real dataset, and the results show that the proposed solution can effectively increase the predictive accuracy compared with the traditional artificial feature extraction method and the standard LSTM model.",
"title": ""
},
{
"docid": "301715c650ee5f918ddeaf0c18889183",
"text": "Keyframe-based Learning from Demonstration has been shown to be an effective method for allowing end-users to teach robots skills. We propose a method for using multiple keyframe demonstrations to learn skills as sequences of positional constraints (c-keyframes) which can be planned between for skill execution. We also introduce an interactive GUI which can be used for displaying the learned c-keyframes to the teacher, for altering aspects of the skill after it has been taught, or for specifying a skill directly without providing kinesthetic demonstrations. We compare 3 methods of teaching c-keyframe skills: kinesthetic teaching, GUI teaching, and kinesthetic teaching followed by GUI editing of the learned skill (K-GUI teaching). Based on user evaluation, the K-GUI method of teaching is found to be the most preferred, and the GUI to be the least preferred. Kinesthetic teaching is also shown to result in more robust constraints than GUI teaching, and several use cases of K-GUI teaching are discussed to show how the GUI can be used to improve the results of kinesthetic teaching.",
"title": ""
},
{
"docid": "3d4fa878fe3e4d3cbeb1ccedd75ee913",
"text": "Digital images are widely communicated over the internet. The security of digital images is an essential and challenging task on shared communication channel. Various techniques are used to secure the digital image, such as encryption, steganography and watermarking. These are the methods for the security of digital images to achieve security goals, i.e. confidentiality, integrity and availability (CIA). Individually, these procedures are not quite sufficient for the security of digital images. This paper presents a blended security technique using encryption, steganography and watermarking. It comprises of three key components: (1) the original image has been encrypted using large secret key by rotating pixel bits to right through XOR operation, (2) for steganography, encrypted image has been altered by least significant bits (LSBs) of the cover image and obtained stego image, then (3) stego image has been watermarked in the time domain and frequency domain to ensure the ownership. The proposed approach is efficient, simpler and secured; it provides significant security against threats and attacks. Keywords—Image security; Encryption; Steganography; Watermarking",
"title": ""
},
{
"docid": "263af7eeb7266b449665aca6d67a0690",
"text": "Model compression is a critical technique to efficiently deploy neural network models on mobile devices which have limited computation resources and tight power budgets. Conventional model compression techniques rely on hand-crafted heuristics and rule-based policies that require domain experts to explore the large design space trading off among model size, speed, and accuracy, which is usually sub-optimal and time-consuming. In this paper, we propose AutoML for Model Compression (AMC) which leverage reinforcement learning to provide the model compression policy. This learning-based compression policy outperforms conventional rule-based compression policy by having higher compression ratio, better preserving the accuracy and freeing human labor. Under 4× FLOPs reduction, we achieved 2.7% better accuracy than the handcrafted model compression policy for VGG-16 on ImageNet. We applied this automated, push-the-button compression pipeline to MobileNet and achieved 1.81× speedup of measured inference latency on an Android phone and 1.43× speedup on the Titan XP GPU, with only 0.1% loss of ImageNet Top-1 accuracy.",
"title": ""
},
{
"docid": "3a92798e81a03e5ef7fb18110e5da043",
"text": "BACKGROUND\nRespiratory failure is a serious complication that can adversely affect the hospital course and survival of multiply injured patients. Some studies have suggested that delayed surgical stabilization of spine fractures may increase the incidence of respiratory complications. However, the authors of these studies analyzed small sets of patients and did not assess the independent effects of multiple risk factors.\n\n\nMETHODS\nA retrospective cohort study was conducted at a regional level-I trauma center to identify risk factors for respiratory failure in patients with surgically treated thoracic and lumbar spine fractures. Demographic, diagnostic, and procedural variables were identified. The incidence of respiratory failure was determined in an adult respiratory distress syndrome registry maintained concurrently at the same institution. Univariate and multivariate analyses were used to determine independent risk factors for respiratory failure. An algorithm was formulated to predict respiratory failure.\n\n\nRESULTS\nRespiratory failure developed in 140 of the 1032 patients in the study cohort. Patients with respiratory failure were older; had a higher mean Injury Severity Score (ISS) and Charlson Comorbidity Index Score; had greater incidences of pneumothorax, pulmonary contusion, and thoracic level injury; had a lower mean Glasgow Coma Score (GCS); were more likely to have had a posterior surgical approach; and had a longer mean time from admission to surgical stabilization than the patients without respiratory failure (p < 0.05). Multivariate analysis identified five independent risk factors for respiratory failure: an age of more than thirty-five years, an ISS of > 25 points, a GCS of < or = 12 points, blunt chest injury, and surgical stabilization performed more than two days after admission. An algorithm was created to determine, on the basis of the number of preoperative predictors present, the relative risk of respiratory failure when surgery was delayed for more than two days.\n\n\nCONCLUSIONS\nIndependent risk factors for respiratory failure were identified in an analysis of a large cohort of patients who had undergone operative stabilization of thoracic and lumbar spine fractures. Early operative stabilization of these fractures, the only risk factor that can be controlled by the physician, may decrease the risk of respiratory failure in multiply injured patients.",
"title": ""
},
{
"docid": "ff4d6551c14eb366c1e316073a4832f5",
"text": "BACKGROUND\nThe importance of transformational leadership for the health and well-being of staff in the healthcare sector is increasingly acknowledged, however, there is less knowledge about the mechanisms that may explain the links between transformational leaders and employee health and well-being.\n\n\nOBJECTIVES\nTo examine two possible psychological mechanisms that link transformational leadership behaviours to employee job satisfaction and well-being.\n\n\nDESIGN\nCross-sectional study design.\n\n\nSETTINGS\nThe study took place in two elderly care centers in large Danish local government. Staff were predominantly healthcare assistants but also nurses and other healthcare-related professions participated in the study.\n\n\nPARTICIPANTS\n274 elderly care employees completed the questionnaire. Surveys were sent to all employees working at the centers. 91% were female, the average age was 45 years.\n\n\nMETHODS\nA questionnaire was distributed to all members of staff in the elderly care centers and where employees were asked to rate their line manager's leadership style and were asked to evaluate their own level of self-efficacy as well as the level of efficacy in their team (team efficacy) and their job satisfaction and psychological well-being.\n\n\nRESULTS\nBoth team and self-efficacy were found to act as mediators, however, their effects differed. Self-efficacy was found to fully mediate the relationship between transformational leadership and well-being and team efficacy was found to partially mediate the relationship between transformational leadership and job satisfaction and fully mediate the relationship between transformational leadership and well-being.\n\n\nCONCLUSIONS\nWithin the pressurised environment faced by employees in the healthcare sector today transformational leaders may help ensure employees' job satisfaction and psychological well-being. They do so through the establishment of a sense of being in control as individuals but also as being part of a competent group.",
"title": ""
},
{
"docid": "751bde322930a292e2ddc8ba06e24f17",
"text": "Machine Learning has been a big success story during the AI resurgence. One particular stand out success relates to learning from a massive amount of data. In spite of early assertions of the unreasonable effectiveness of data, there is increasing recognition for utilizing knowledge whenever it is available or can be created purposefully. In this paper, we discuss the indispensable role of knowledge for deeper understanding of content where (i) large amounts of training data are unavailable, (ii) the objects to be recognized are complex, (e.g., implicit entities and highly subjective content), and (iii) applications need to use complementary or related data in multiple modalities/media. What brings us to the cusp of rapid progress is our ability to (a) create relevant and reliable knowledge and (b) carefully exploit knowledge to enhance ML/NLP techniques. Using diverse examples, we seek to foretell unprecedented progress in our ability for deeper understanding and exploitation of multimodal data and continued incorporation of knowledge in learning techniques.",
"title": ""
},
{
"docid": "c70f8bd719642ed818efc5387ffb6b55",
"text": "In this work, we propose a novel framework for privacy-preserving client-distributed machine learning. It is motivated by the desire to achieve differential privacy guarantees in the local model of privacy in a way that satisfies all systems constraints using asynchronous client-server communication and provides attractive model learning properties. We call it “Draw and Discard” because it relies on random sampling of models for load distribution (scalability), which also provides additional server-side privacy protections and improved model quality through averaging. We present the mechanics of client and server components of “Draw and Discard” and demonstrate how the framework can be applied to learning Generalized Linear models. We then analyze the privacy guarantees provided by our approach against several types of adversaries and showcase experimental results that provide evidence for the framework’s viability in practical deployments. We believe our framework is the first deployed distributed machine learning approach that operates in the local privacy model.",
"title": ""
},
{
"docid": "64e5cad1b64f1412b406adddc98cd421",
"text": "We examine the influence of venture capital on patented inventions in the United States across twenty industries over three decades. We address concerns about causality in several ways, including exploiting a 1979 policy shift that spurred venture capital fundraising. We find that increases in venture capital activity in an industry are associated with significantly higher patenting rates. While the ratio of venture capital to R&D averaged less than 3% from 1983–1992, our estimates suggest that venture capital may have accounted for 8% of industrial innovations in that period.",
"title": ""
},
{
"docid": "4080a3a6d4272e44541a7082a311cacb",
"text": "Cyberbullying is a repeated act that harasses, humiliates, threatens, or hassles other people through electronic devices and online social networking websites. Cyberbullying through the internet is more dangerous than traditional bullying, because it can potentially amplify the humiliation to an unlimited online audience. According to UNICEF and a survey by the Indonesian Ministry of Communication and Information, 58% of 435 adolescents do not understand about cyberbullying. Some of them might even have been the bullies, but since they did not understand about cyberbullying they could not recognise the negative effects of their bullying. The bullies may not recognise the harm of their actions, because they do not see immediate responses from their victims. Our study aimed to detect cyberbullying actors based on texts and the credibility analysis of users and notify them about the harm of cyberbullying. We collected data from Twitter. Since the data were unlabelled, we built a web-based labelling tool to classify tweets into cyberbullying and non-cyberbullying tweets. We obtained 301 cyberbullying tweets, 399 non-cyberbullying tweets, 2,053 negative words and 129 swear words from the tool. Afterwards, we applied SVM and KNN to learn about and detect cyberbullying texts. The results show that SVM results in the highest f1-score, 67%. We also measured the credibility analysis of users and found 257 Normal Users, 45 Harmful Bullying Actors, 53 Bullying Actors and 6 Prospective Bullying Actors.",
"title": ""
},
{
"docid": "b6b5afb72393e89c211bac283e39d8a3",
"text": "In order to promote the use of mushrooms as source of nutrients and nutraceuticals, several experiments were performed in wild and commercial species. The analysis of nutrients included determination of proteins, fats, ash, and carbohydrates, particularly sugars by HPLC-RI. The analysis of nutraceuticals included determination of fatty acids by GC-FID, and other phytochemicals such as tocopherols, by HPLC-fluorescence, and phenolics, flavonoids, carotenoids and ascorbic acid, by spectrophotometer techniques. The antimicrobial properties of the mushrooms were also screened against fungi, Gram positive and Gram negative bacteria. The wild mushroom species proved to be less energetic than the commercial sp., containing higher contents of protein and lower fat concentrations. In general, commercial species seem to have higher concentrations of sugars, while wild sp. contained lower values of MUFA but also higher contents of PUFA. alpha-Tocopherol was detected in higher amounts in the wild species, while gamma-tocopherol was not found in these species. Wild mushrooms revealed a higher content of phenols but a lower content of ascorbic acid, than commercial mushrooms. There were no differences between the antimicrobial properties of wild and commercial species. The ongoing research will lead to a new generation of foods, and will certainly promote their nutritional and medicinal use.",
"title": ""
},
{
"docid": "17b28297ac057faaad52c559b272426c",
"text": "Microarray data analysis and classification has demonstrated convincingly that it provides an effective methodology for the effective diagnosis of diseases and cancers. Although much research has been performed on applying machine learning techniques for microarray data classification during the past years, it has been shown that conventional machine learning techniques have intrinsic drawbacks in achieving accurate and robust classifications. This paper presents a novel ensemble machine learning approach for the development of robust microarray data classification. Different from the conventional ensemble learning techniques, the approach presented begins with generating a pool of candidate base classifiers based on the gene sub-sampling and then the selection of a sub-set of appropriate base classifiers to construct the classification committee based on classifier clustering. Experimental results have demonstrated that the classifiers constructed by the proposed method outperforms not only the classifiers generated by the conventional machine learning but also the classifiers generated by two widely used conventional ensemble learning methods (bagging and boosting).",
"title": ""
}
] |
scidocsrr
|
6a5783cc4e6a093f505b017eadcfd23b
|
Dissociable roles of prefrontal and anterior cingulate cortices in deception.
|
[
{
"docid": "908716e7683bdc78283600f63bd3a1b0",
"text": "The need for a simply applied quantitative assessment of handedness is discussed and some previous forms reviewed. An inventory of 20 items with a set of instructions and responseand computational-conventions is proposed and the results obtained from a young adult population numbering some 1100 individuals are reported. The separate items are examined from the point of view of sex, cultural and socio-economic factors which might appertain to them and also of their inter-relationship to each other and to the measure computed from them all. Criteria derived from these considerations are then applied to eliminate 10 of the original 20 items and the results recomputed to provide frequency-distribution and cumulative frequency functions and a revised item-analysis. The difference of incidence of handedness between the sexes is discussed.",
"title": ""
}
] |
[
{
"docid": "b59965c405937a096186e41b2a3877c3",
"text": "The culmination of many years of increasing research into the toxicity of tau aggregation in neurodegenerative disease has led to the consensus that soluble, oligomeric forms of tau are likely the most toxic entities in disease. While tauopathies overlap in the presence of tau pathology, each disease has a unique combination of symptoms and pathological features; however, most study into tau has grouped tau oligomers and studied them as a homogenous population. Established evidence from the prion field combined with the most recent tau and amyloidogenic protein research suggests that tau is a prion-like protein, capable of seeding the spread of pathology throughout the brain. Thus, it is likely that tau may also form prion-like strains or diverse conformational structures that may differ by disease and underlie some of the differences in symptoms and pathology in neurodegenerative tauopathies. The development of techniques and new technology for the detection of tau oligomeric strains may, therefore, lead to more efficacious diagnostic and treatment strategies for neurodegenerative disease. [Formula: see text].",
"title": ""
},
{
"docid": "b705b194b79133957662c018ea6b1c7a",
"text": "Skew detection has been an important part of the document recognition system. A lot of techniques already exists and has currently been developing for detection of skew of scanned document images. This paper describes the skew detection and correction of scanned document images written in Assamese language using the horizontal and vertical projection profile analysis and brings out the differences after implementation of both the techniques.",
"title": ""
},
{
"docid": "1813c1cefbb5607660626b6c05c41960",
"text": "First described in 1925, giant condyloma acuminatum also known as Buschke-Löwenstein tumor (BLT) is a benign, slow-growing, locally destructive cauliflower-like lesion usually in the genital region. The disease is usually locally aggressive and destructive with a potential for malignant transformation. The causative organism is human papilloma virus. The most common risk factor is immunosuppression with HIV; however, any other cause of immunodeficiency can be a predisposing factor. We present a case of 33-year-old female patient, a known HIV patient on antiretroviral therapy for ten months. She presented with seven-month history of an abnormal growth in the genitalia that was progressive accompanied with foul smelling yellowish discharge and friable. Surgical excision was performed successfully. Pap smear of the excised tissue was negative. Despite being a rare condition, giant condyloma acuminatum is relatively common in HIV-infected patients.",
"title": ""
},
{
"docid": "0763497a09f54e2d49a03e262dcc7b6e",
"text": "Content-based subscription systems are an emerging alternative to traditional publish-subscribe systems, because they permit more flexible subscriptions along multiple dimensions. In these systems, each subscription is a predicate which may test arbitrary attributes within an event. However, the matching problem for content-based systems — determining for each event the subset of all subscriptions whose predicates match the event — is still an open problem. We present an efficient, scalable solution to the matching problem. Our solution has an expected time complexity that is sub-linear in the number of subscriptions, and it has a space complexity that is linear. Specifically, we prove that for predicates reducible to conjunctions of elementary tests, the expected time to match a random event is no greater than O(N ) where N is the number of subscriptions, and is a closed-form expression that depends on the number and type of attributes (in some cases, 1=2). We present some optimizations to our algorithms that improve the search time. We also present the results of simulations that validate the theoretical bounds and that show acceptable performance levels for tens of thousands of subscriptions. Department of Computer Science, Cornell University, Ithaca, N.Y. 14853-7501, [email protected] IBM T.J. Watson Research Center, Yorktown Heights, N.Y. 10598, fstrom, sturman, [email protected] Department of Computer Science, University of Illinois at Urbana-Champaign, 1304 W. Springfield Ave, Urbana, I.L. 61801, [email protected]",
"title": ""
},
{
"docid": "037fb8eb72b55b8dae1aee107eb6b15c",
"text": "Traditional methods on video summarization are designed to generate summaries for single-view video records, and thus they cannot fully exploit the mutual information in multi-view video records. In this paper, we present a multiview metric learning framework for multi-view video summarization. It combines the advantages of maximum margin clustering with the disagreement minimization criterion. The learning framework thus has the ability to find a metric that best separates the input data, and meanwhile to force the learned metric to maintain underlying intrinsic structure of data points, for example geometric information. Facilitated by such a framework, a systematic solution to the multi-view video summarization problem is developed from the viewpoint of metric learning. The effectiveness of the proposed method is demonstrated by experiments.",
"title": ""
},
{
"docid": "76c31d0f392b81658270805daaff661d",
"text": "One of the major challenges of model-free visual tracking problem has been the difficulty originating from the unpredictable and drastic changes in the appearance of objects we target to track. Existing methods tackle this problem by updating the appearance model on-line in order to adapt to the changes in the appearance. Despite the success of these methods however, inaccurate and erroneous updates of the appearance model result in a tracker drift. In this paper, we introduce a novel visual tracking algorithm based on a template selection strategy constructed by deep reinforcement learning methods. The tracking algorithm utilizes this strategy to choose the best template for tracking a given frame. The template selection strategy is selflearned by utilizing a simple policy gradient method on numerous training episodes randomly generated from a tracking benchmark dataset. Our proposed reinforcement learning framework is generally applicable to other confidence map based tracking algorithms. The experiment shows that our tracking algorithm effectively decides the best template for visual tracking.",
"title": ""
},
{
"docid": "7c974eacb24368a0c5acfeda45d60f64",
"text": "We propose a novel approach for verifying model hypotheses in cluttered and heavily occluded 3D scenes. Instead of verifying one hypothesis at a time, as done by most state-of-the-art 3D object recognition methods, we determine object and pose instances according to a global optimization stage based on a cost function which encompasses geometrical cues. Peculiar to our approach is the inherent ability to detect significantly occluded objects without increasing the amount of false positives, so that the operating point of the object recognition algorithm can nicely move toward a higher recall without sacrificing precision. Our approach outperforms state-of-the-art on a challenging dataset including 35 household models obtained with the Kinect sensor, as well as on the standard 3D object recognition benchmark dataset.",
"title": ""
},
{
"docid": "ac8cef535e5038231cdad324325eaa37",
"text": "There are mainly two types of Emergent Self-Organizing Maps (ESOM) grid structures in use: hexgrid (honeycomb like) and quadgrid (trellis like) maps. In addition to that, the shape of the maps may be square or rectangular. This work investigates the effects of these different map layouts. Hexgrids were found to have no convincing advantage over quadgrids. Rectangular maps, however, are distinctively superior to square maps. Most surprisingly, rectangular maps outperform square maps for isotropic data, i.e. data sets with no particular primary direction.",
"title": ""
},
{
"docid": "921d9dc34f32522200ddcd606d22b6b4",
"text": "The covariancematrix adaptation evolution strategy (CMA-ES) is one of themost powerful evolutionary algorithms for real-valued single-objective optimization. In this paper, we develop a variant of the CMA-ES for multi-objective optimization (MOO). We first introduce a single-objective, elitist CMA-ES using plus-selection and step size control based on a success rule. This algorithm is compared to the standard CMA-ES. The elitist CMA-ES turns out to be slightly faster on unimodal functions, but is more prone to getting stuck in sub-optimal local minima. In the new multi-objective CMAES (MO-CMA-ES) a population of individuals that adapt their search strategy as in the elitist CMA-ES is maintained. These are subject to multi-objective selection. The selection is based on non-dominated sorting using either the crowding-distance or the contributing hypervolume as second sorting criterion. Both the elitist single-objective CMA-ES and the MO-CMA-ES inherit important invariance properties, in particular invariance against rotation of the search space, from the original CMA-ES. The benefits of the new MO-CMA-ES in comparison to the well-known NSGA-II and to NSDE, a multi-objective differential evolution algorithm, are experimentally shown.",
"title": ""
},
{
"docid": "246866da7509b2a8a2bda734a664de9c",
"text": "In this paper we present an approach of procedural game content generation that focuses on a gameplay loops formal language (GLFL). In fact, during an iterative game design process, game designers suggest modifications that often require high development costs. The proposed language and its operational semantic allow reducing the gap between game designers' requirement and game developers' needs, enhancing therefore video games productivity. Using gameplay loops concept for game content generation offers a low cost solution to adjust game challenges, objectives and rewards in video games. A pilot experiment have been conducted to study the impact of this approach on game development.",
"title": ""
},
{
"docid": "1d50d61d6b0abb0d5bec74d613ffe172",
"text": "We propose a novel hardware-accelerated voxelization algorithm for polygonal models. Compared with previous approaches, our algorithm has a major advantage that it guarantees the conservative correctness in voxelization: every voxel intersecting the input model is correctly recognized. This property is crucial for applications like collision detection, occlusion culling and visibility processing. We also present an efficient and robust implementation of the algorithm in the GPU. Experiments show that our algorithm has a lower memory consumption than previous approaches and is more efficient when the volume resolution is high. In addition, our algorithm requires no preprocessing and is suitable for voxelizing deformable models.",
"title": ""
},
{
"docid": "a7e8c3a64f6ba977e142de9b3dae7e57",
"text": "Craniofacial superimposition is a process that aims to identify a person by overlaying a photograph and a model of the skull. This process is usually carried out manually by forensic anthropologists; thus being very time consuming and presenting several difficulties in finding a good fit between the 3D model of the skull and the 2D photo of the face. In this paper we present a fast and automatic procedure to tackle the superimposition problem. The proposed method is based on real-coded genetic algorithms. Synthetic data are used to validate the method. Results on a real case from our Physical Anthropology lab of the University of Granada are also presented.",
"title": ""
},
{
"docid": "220d7b64db1731667e57ed318d2502ce",
"text": "Neutrophils infiltration/activation following wound induction marks the early inflammatory response in wound repair. However, the role of the infiltrated/activated neutrophils in tissue regeneration/proliferation during wound repair is not well understood. Here, we report that infiltrated/activated neutrophils at wound site release pyruvate kinase M2 (PKM2) by its secretive mechanisms during early stages of wound repair. The released extracellular PKM2 facilitates early wound healing by promoting angiogenesis at wound site. Our studies reveal a new and important molecular linker between the early inflammatory response and proliferation phase in tissue repair process.",
"title": ""
},
{
"docid": "1d7ee43299e3a7581d11604f1596aeab",
"text": "We analyze the impact of corruption on bilateral trade, highlighting its dual role in terms of extortion and evasion. Corruption taxes trade, when corrupt customs officials in the importing country extort bribes from exporters (extortion effect); however, with high tariffs, corruption may be trade enhancing when corrupt officials allow exporters to evade tariff barriers (evasion effect). We derive and estimate a corruption-augmented gravity model, where the effect of corruption on trade flows is ambiguous and contingent on tariffs. Empirically, corruption taxes trade in the majority of cases, but in high-tariff environments (covering 5% to 14% of the observations) their marginal effect is trade enhancing.",
"title": ""
},
{
"docid": "da607ab67cb9c1e1d08a70b15f9470d7",
"text": "Network embedding (NE) is playing a critical role in network analysis, due to its ability to represent vertices with efficient low-dimensional embedding vectors. However, existing NE models aim to learn a fixed context-free embedding for each vertex and neglect the diverse roles when interacting with other vertices. In this paper, we assume that one vertex usually shows different aspects when interacting with different neighbor vertices, and should own different embeddings respectively. Therefore, we present ContextAware Network Embedding (CANE), a novel NE model to address this issue. CANE learns context-aware embeddings for vertices with mutual attention mechanism and is expected to model the semantic relationships between vertices more precisely. In experiments, we compare our model with existing NE models on three real-world datasets. Experimental results show that CANE achieves significant improvement than state-of-the-art methods on link prediction and comparable performance on vertex classification. The source code and datasets can be obtained from https://github.com/ thunlp/CANE.",
"title": ""
},
{
"docid": "0962dfe13c1960b345bb0abb480f1520",
"text": "This electronic document presents the application of a novel method of bipedal walking pattern generation assured by “the liquid level model” and the preview control of zero-moment-point (ZMP). In this method, the trajectory of the center of mass (CoM) of the robot is generated assured by the preview controller to maintain the ZMP at the desired location knowing that the robot is modeled as a running liquid level model on a tank. The proposed approach combines the preview control theory with simple model “the liquid level model”, to assure a stable dynamic walking. Simulations results show that the proposed pattern generator guarantee not only to walk dynamically stable but also good performance.",
"title": ""
},
{
"docid": "9415adaa3ec2f7873a23cc2017a2f1ee",
"text": "In this paper we introduce a new unsupervised reinforcement learning method for discovering the set of intrinsic options available to an agent. This set is learned by maximizing the number of different states an agent can reliably reach, as measured by the mutual information between the set of options and option termination states. To this end, we instantiate two policy gradient based algorithms, one that creates an explicit embedding space of options and one that represents options implicitly. The algorithms also provide an explicit measure of empowerment in a given state that can be used by an empowerment maximizing agent. The algorithm scales well with function approximation and we demonstrate the applicability of the algorithm on a range of tasks.",
"title": ""
},
{
"docid": "05540e05370b632f8b8cd165ae7d1d29",
"text": "We describe FreeCam a system capable of generating live free-viewpoint video by simulating the output of a virtual camera moving through a dynamic scene. The FreeCam sensing hardware consists of a small number of static color video cameras and state-of-the-art Kinect depth sensors, and the FreeCam software uses a number of advanced GPU processing and rendering techniques to seamlessly merge the input streams, providing a pleasant user experience. A system such as FreeCam is critical for applications such as telepresence, 3D video-conferencing and interactive 3D TV. FreeCam may also be used to produce multi-view video, which is critical to drive newgeneration autostereoscopic lenticular 3D displays.",
"title": ""
},
{
"docid": "77ce917536f59d5489d0d6f7000c7023",
"text": "In this supplementary document, we present additional results to complement the paper. First, we provide the detailed configurations and parameters of the generator and discriminator in the proposed Generative Adversarial Network. Second, we present the qualitative comparisons with the state-ofthe-art CNN-based optical flow methods. The complete results and source code are publicly available on http://vllab.ucmerced.edu/wlai24/semiFlowGAN.",
"title": ""
}
] |
scidocsrr
|
cfe1dd6ca8441b2c694ac3d856e9f5fb
|
Using boosted trees for click-through rate prediction for sponsored search
|
[
{
"docid": "3734fd47cf4e4e5c00f660cbb32863f0",
"text": "We describe a new Bayesian click-through rate (CTR) prediction algorithm used for Sponsored Search in Microsoft’s Bing search engine. The algorithm is based on a probit regression model that maps discrete or real-valued input features to probabilities. It maintains Gaussian beliefs over weights of the model and performs Gaussian online updates derived from approximate message passing. Scalability of the algorithm is ensured through a principled weight pruning procedure and an approximate parallel implementation. We discuss the challenges arising from evaluating and tuning the predictor as part of the complex system of sponsored search where the predictions made by the algorithm decide about future training sample composition. Finally, we show experimental results from the production system and compare to a calibrated Naïve Bayes algorithm.",
"title": ""
}
] |
[
{
"docid": "7b6cf139cae3e9dae8a2886ddabcfef0",
"text": "An enhanced automated material handling system (AMHS) that uses a local FOUP buffer at each tool is presented as a method of enabling lot size reduction and parallel metrology sampling in the photolithography (litho) bay. The local FOUP buffers can be integrated with current OHT AMHS systems in existing fabs with little or no change to the AMHS or process equipment. The local buffers enhance the effectiveness of the OHT by eliminating intermediate moves to stockers, increasing the move rate capacity by 15-20%, and decreasing the loadport exchange time to 30 seconds. These enhancements can enable the AMHS to achieve the high move rates compatible with lot size reduction down to 12-15 wafers per FOUP. The implementation of such a system in a photolithography bay could result in a 60-74% reduction in metrology delay time, which is the time between wafer exposure at a litho tool and collection of metrology and inspection data.",
"title": ""
},
{
"docid": "9c9e1458740337c7b074710297a386a8",
"text": "Seed dormancy is an innate seed property that defines the environmental conditions in which the seed is able to germinate. It is determined by genetics with a substantial environmental influence which is mediated, at least in part, by the plant hormones abscisic acid and gibberellins. Not only is the dormancy status influenced by the seed maturation environment, it is also continuously changing with time following shedding in a manner determined by the ambient environment. As dormancy is present throughout the higher plants in all major climatic regions, adaptation has resulted in divergent responses to the environment. Through this adaptation, germination is timed to avoid unfavourable weather for subsequent plant establishment and reproductive growth. In this review, we present an integrated view of the evolution, molecular genetics, physiology, biochemistry, ecology and modelling of seed dormancy mechanisms and their control of germination. We argue that adaptation has taken place on a theme rather than via fundamentally different paths and identify similarities underlying the extensive diversity in the dormancy response to the environment that controls germination.",
"title": ""
},
{
"docid": "d791a5d7a113a5d789452e664669570c",
"text": "Cloud computing is a new way of delivering computing resources and is not a new technology. It is an internet based service delivery model which provides internet based services, computing and storage for users in all markets including financial health care and government. This new economic model for computing has found fertile ground and is attracting massive global investment. Although the benefits of cloud computing are clear, so is the need to develop proper security for cloud implementations. Cloud security is becoming a key differentiator and competitive edge between cloud providers. This paper discusses the security issues that arise in a cloud computing frame work. It focuses on technical security issues arising from the usage of cloud services and also provides an overview of key security issues related to cloud computing with the view of a secure cloud architecture environment.",
"title": ""
},
{
"docid": "463d0bca287f0bd00585b4c96d12d014",
"text": "In this paper, we present a novel approach to extract songlevel descriptors built from frame-level timbral features such as Mel-frequency cepstral coefficient (MFCC). These descriptors are called identity vectors or i-vectors and are the results of a factor analysis procedure applied on framelevel features. The i-vectors provide a low-dimensional and fixed-length representation for each song and can be used in a supervised and unsupervised manner. First, we use the i-vectors for an unsupervised music similarity estimation, where we calculate the distance between i-vectors in order to predict the genre of songs. Second, for a supervised artist classification task we report the performance measures using multiple classifiers trained on the i-vectors. Standard datasets for each task are used to evaluate our method and the results are compared with the state of the art. By only using timbral information, we already achieved the state of the art performance in music similarity (which uses extra information such as rhythm). In artist classification using timbre descriptors, our method outperformed the state of the art.",
"title": ""
},
{
"docid": "c678ea5e9bc8852ec80a8315a004c7f0",
"text": "Educators, researchers, and policy makers have advocated student involvement for some time as an essential aspect of meaningful learning. In the past twenty years engineering educators have implemented several means of better engaging their undergraduate students, including active and cooperative learning, learning communities, service learning, cooperative education, inquiry and problem-based learning, and team projects. This paper focuses on classroom-based pedagogies of engagement, particularly cooperative and problem-based learning. It includes a brief history, theoretical roots, research support, summary of practices, and suggestions for redesigning engineering classes and programs to include more student engagement. The paper also lays out the research ahead for advancing pedagogies aimed at more fully enhancing students’ involvement in their learning.",
"title": ""
},
{
"docid": "a4a4c67e0ca81a099f58146fccc5a2eb",
"text": "Chinese calligraphy is among the finest and most important of all Chinese art forms and an inseparable part of Chinese history. Its delicate aesthetic effects are generally considered to be unique among all calligraphic arts. Its subtle power is integral to traditional Chinese painting. A novel intelligent system uses a constraint-based analogous-reasoning process to automatically generate original Chinese calligraphy that meets visually aesthetic requirements. We propose an intelligent system that can automatically create novel, aesthetically appealing Chinese calligraphy from a few training examples of existing calligraphic styles. To demonstrate the proposed methodology's feasibility, we have implemented a prototype system that automatically generates new Chinese calligraphic art from a small training set.",
"title": ""
},
{
"docid": "87e050b5ae29487cb9cbdbbe672010ea",
"text": "The goal of data mining is to extract or “mine” knowledge from large amounts of data. However, data is often collected by several different sites. Privacy, legal and commercial concerns restrict centralized access to this data, thus derailing data mining projects. Recently, there has been growing focus on finding solutions to this problem. Several algorithms have been proposed that do distributed knowledge discovery, while providing guarantees on the non-disclosure of data. Vertical partitioning of data is an important data distribution model often found in real life. Vertical partitioning or heterogeneous distribution implies that different features of the same set of data are collected by different sites. In this chapter we survey some of the methods developed in the literature to mine vertically partitioned data without violating privacy and discuss challenges and complexities specific to vertical partitioning.",
"title": ""
},
{
"docid": "5fafb56408b75344fe7e55260a758180",
"text": "This paper presents a new conversion method to automatically transform a constituent-based Vietnamese Treebank into dependency trees. On a dependency Treebank created according to our new approach, we examine two stateof-the-art dependency parsers: the MSTParser and the MaltParser. Experiments show that the MSTParser outperforms the MaltParser. To the best of our knowledge, we report the highest performances published to date in the task of dependency parsing for Vietnamese. Particularly, on gold standard POS tags, we get an unlabeled attachment score of 79.08% and a labeled attachment score of 71.66%.",
"title": ""
},
{
"docid": "ac1cf73b0f59279d02611239781af7c1",
"text": "This paper presents V3, an unsupervised system for aspect-based Sentiment Analysis when evaluated on the SemEval 2014 Task 4. V3 focuses on generating a list of aspect terms for a new domain using a collection of raw texts from the domain. We also implement a very basic approach to classify the aspect terms into categories and assign polarities to them.",
"title": ""
},
{
"docid": "99582c5c50f5103f15a6777af94c6584",
"text": "Depth estimation in computer vision and robotics is most commonly done via stereo vision (stereopsis), in which images from two cameras are used to triangulate and estimate distances. However, there are also numerous monocular visual cues— such as texture variations and gradients, defocus, color/haze, etc.—that have heretofore been little exploited in such systems. Some of these cues apply even in regions without texture, where stereo would work poorly. In this paper, we apply a Markov Random Field (MRF) learning algorithm to capture some of these monocular cues, and incorporate them into a stereo system. We show that by adding monocular cues to stereo (triangulation) ones, we obtain significantly more accurate depth estimates than is possible using either monocular or stereo cues alone. This holds true for a large variety of environments, including both indoor environments and unstructured outdoor environments containing trees/forests, buildings, etc. Our approach is general, and applies to incorporating monocular cues together with any off-the-shelf stereo system.",
"title": ""
},
{
"docid": "32b860121b49bd3a61673b3745b7b1fd",
"text": "Online reviews are a growing market, but it is struggling with fake reviews. They undermine both the value of reviews to the user, and their trust in the review sites. However, fake positive reviews can boost a business, and so a small industry producing fake reviews has developed. The two sides are facing an arms race that involves more and more natural language processing (NLP). So far, NLP has been used mostly for detection, and works well on human-generated reviews. But what happens if NLP techniques are used to generate fake reviews as well? We investigate the question in an adversarial setup, by assessing the detectability of different fake-review generation strategies. We use generative models to produce reviews based on meta-information, and evaluate their effectiveness against deceptiondetection models and human judges. We find that meta-information helps detection, but that NLP-generated reviews conditioned on such information are also much harder to detect than conventional ones.",
"title": ""
},
{
"docid": "196868f85571b16815127d2bd87b98ff",
"text": "Scientists have predicted that carbon’s immediate neighbors on the periodic chart, boron and nitrogen, may also form perfect nanotubes, since the advent of carbon nanotubes (CNTs) in 1991. First proposed then synthesized by researchers at UC Berkeley in the mid 1990’s, the boron nitride nanotube (BNNT) has proven very difficult to make until now. Herein we provide an update on a catalyst-free method for synthesizing highly crystalline, small diameter BNNTs with a high aspect ratio using a high power laser under a high pressure and high temperature environment first discovered jointly by NASA/NIA JSA. Progress in purification methods, dispersion studies, BNNT mat and composite formation, and modeling and diagnostics will also be presented. The white BNNTs offer extraordinary properties including neutron radiation shielding, piezoelectricity, thermal oxidative stability (> 800 ̊C in air), mechanical strength, and toughness. The characteristics of the novel BNNTs and BNNT polymer composites and their potential applications are discussed.",
"title": ""
},
{
"docid": "96fb1910ed0127ad330fd427335b4587",
"text": "OBJECTIVES\nThe aim of this cross-sectional in vivo study was to assess the effect of green tea and honey solutions on the level of salivary Streptococcus mutans.\n\n\nSTUDY DESIGN\nA convenient sample of 30 Saudi boys aged 7-10 years were randomly assigned into 2 groups of 15 each. Saliva sample was collected for analysis of level of S. mutans before rinsing. Commercial honey and green tea were prepared for use and each child was asked to rinse for two minutes using 10 mL of the prepared honey or green tea solutions according to their group. Saliva samples were collected again after rinsing. The collected saliva samples were prepared and colony forming unit (CFU) of S. mutans per mL of saliva was calculated.\n\n\nRESULTS\nThe mean number of S. mutans before and after rinsing with honey and green tea solutions were 2.28* 10(8)(2.622*10(8)), 5.64 *10(7)(1.03*10(8)), 1.17*10(9)(2.012*10(9)) and 2.59*10(8) (3.668*10(8)) respectively. A statistically significant reduction in the average number of S. mutans at baseline and post intervention in the children who were assigned to the honey (P=0.001) and green tea (P=0.001) groups was found.\n\n\nCONCLUSIONS\nA single time mouth rinsing with honey and green tea solutions for two minutes effectively reduced the number of salivary S. mutans of 7-10 years old boys.",
"title": ""
},
{
"docid": "59ef2705492241fbe588e36c77f142bc",
"text": "A reciprocal frame (RF) is a self-supported three-dimensional structure made up of three or more sloping rods, which form a closed circuit, namely an RF-unit. Large RF-structures built as complex grillages of one or a few similar RF-units have an intrinsic beauty derived from their inherent self-similar and highly symmetric patterns. Designing RF-structures that span over large domains is an intricate and complex task. In this paper, we present an interactive computational tool for designing RF-structures over a 3D guiding surface, focusing on the aesthetic aspect of the design.\n There are three key contributions in this work. First, we draw an analogy between RF-structures and plane tiling with regular polygons, and develop a computational scheme to generate coherent RF-tessellations from simple grammar rules. Second, we employ a conformal mapping to lift the 2D tessellation over a 3D guiding surface, allowing a real-time preview and efficient exploration of wide ranges of RF design parameters. Third, we devise an optimization method to guarantee the collinearity of contact joints along each rod, while preserving the geometric properties of the RF-structure. Our tool not only supports the design of wide variety of RF pattern classes and their variations, but also allows preview and refinement through interactive controls.",
"title": ""
},
{
"docid": "1a9fc19eb416eebdbfe1110c37e0852b",
"text": "Two important aspects of switched-mode (Class-D) amplifiers providing a high signal to noise ratio (SNR) for mechatronic applications are investigated. Signal jitter is common in digital systems and introduces noise, leading to a deterioration of the SNR. Hence, a jitter elimination technique for the transistor gate signals in power electronic converters is presented and verified. Jitter is reduced tenfold as compared to traditional approaches to values of 25 ps at the output of the power stage. Additionally, digital modulators used for the generation of the switch control signals can only achieve a limited resolution (and hence, limited SNR) due to timing constraints in digital circuits. Consequently, a specialized modulator structure based on noise shaping is presented and optimized which enables the creation of high-resolution switch control signals. This, together with the jitter reduction circuit, enables half-bridge output voltage SNR values of more than 100dB in an open-loop system.",
"title": ""
},
{
"docid": "6c68bccf376da1f963aaa8ec5e08b646",
"text": "The composition of the gut microbiota is in constant flow under the influence of factors such as the diet, ingested drugs, the intestinal mucosa, the immune system, and the microbiota itself. Natural variations in the gut microbiota can deteriorate to a state of dysbiosis when stress conditions rapidly decrease microbial diversity and promote the expansion of specific bacterial taxa. The mechanisms underlying intestinal dysbiosis often remain unclear given that combinations of natural variations and stress factors mediate cascades of destabilizing events. Oxidative stress, bacteriophages induction and the secretion of bacterial toxins can trigger rapid shifts among intestinal microbial groups thereby yielding dysbiosis. A multitude of diseases including inflammatory bowel diseases but also metabolic disorders such as obesity and diabetes type II are associated with intestinal dysbiosis. The characterization of the changes leading to intestinal dysbiosis and the identification of the microbial taxa contributing to pathological effects are essential prerequisites to better understand the impact of the microbiota on health and disease.",
"title": ""
},
{
"docid": "4124c4c838d0c876f527c021a2c58358",
"text": "Early disease detection is a major challenge in agriculture field. Hence proper measures has to be taken to fight bioagressors of crops while minimizing the use of pesticides. The techniques of machine vision are extensively applied to agricultural science, and it has great perspective especially in the plant protection field,which ultimately leads to crops management. Our goal is early detection of bioagressors. The paper describes a software prototype system for pest detection on the infected images of different leaves. Images of the infected leaf are captured by digital camera and processed using image growing, image segmentation techniques to detect infected parts of the particular plants. Then the detected part is been processed for futher feature extraction which gives general idea about pests. This proposes automatic detection and calculating area of infection on leaves of a whitefly (Trialeurodes vaporariorum Westwood) at a mature stage.",
"title": ""
},
{
"docid": "36911701bcf6029eb796bac182e5aa4c",
"text": "In this paper, we describe the approaches taken in the 4WARD project to address the challenges of the network of the future. Our main hypothesis is that the Future Internet must allow for the fast creation of diverse network designs and paradigms, and must also support their co-existence at run-time. We observe that a pure evolutionary path from the current Internet design will not be able to address, in a satisfactory manner, major issues like the handling of mobile users, information access and delivery, wide area sensor network applications, high management complexity, and malicious traffic that hamper network performance already today. Moreover, the Internetpsilas focus on interconnecting hosts and delivering bits has to be replaced by a more holistic vision of a network of information and content. This is a natural evolution of scope requiring nonetheless a re-design of the architecture. We describe how 4WARD directs research on network virtualisation, novel InNetworkManagement, a generic path concept, and an information centric approach, into a single framework for a diversified, but interoperable, network of the future.",
"title": ""
},
{
"docid": "d8c5ff196db9acbea12e923b2dcef276",
"text": "MoS<sub>2</sub>-graphene-based hybrid structures are biocompatible and useful in the field of biosensors. Herein, we propose a heterostructured MoS<sub>2</sub>/aluminum (Al) film/MoS<sub>2</sub>/graphene as a highly sensitive surface plasmon resonance (SPR) biosensor based on the Otto configuration. The sensitivity of the proposed biosensor is enhanced by using three methods. First, prisms of different refractive index have been discussed and it is found that sensitivity can be enhanced by using a low refractive index prism. Second, the influence of the thickness of the air layer on the sensitivity is analyzed and the optimal thickness of air is obtained. Finally, the sensitivity improvement and mechanism by using molybdenum disulfide (MoS<sub>2</sub>)–graphene hybrid structure is revealed. The maximum sensitivity ∼ 190.83°/RIU is obtained with six layers of MoS<sub>2</sub> coating on both surfaces of Al thin film.",
"title": ""
},
{
"docid": "9b49a4673456ab8e9f14a0fe5fb8bcc7",
"text": "Legged robots offer the potential to navigate a wide variety of terrains that are inaccessible to wheeled vehicles. In this paper we consider the planning and control tasks of navigating a quadruped robot over a wide variety of challenging terrain, including terrain which it has not seen until run-time. We present a software architecture that makes use of both static and dynamic gaits, as well as specialized dynamic maneuvers, to accomplish this task. Throughout the paper we highlight two themes that have been central to our approach: 1) the prevalent use of learning algorithms, and 2) a focus on rapid recovery and replanning techniques; we present several novel methods and algorithms that we developed for the quadruped and that illustrate these two themes. We evaluate the performance of these different methods, and also present and discuss the performance of our system on the official Learning Locomotion tests.",
"title": ""
}
] |
scidocsrr
|
c1957d49ea08b47f516dcc7f032a3a71
|
Mining evolutionary multi-branch trees from text streams
|
[
{
"docid": "2ecfc909301dcc6241bec2472b4d4135",
"text": "Previous work on text mining has almost exclusively focused on a single stream. However, we often have available multiple text streams indexed by the same set of time points (called coordinated text streams), which offer new opportunities for text mining. For example, when a major event happens, all the news articles published by different agencies in different languages tend to cover the same event for a certain period, exhibiting a correlated bursty topic pattern in all the news article streams. In general, mining correlated bursty topic patterns from coordinated text streams can reveal interesting latent associations or events behind these streams. In this paper, we define and study this novel text mining problem. We propose a general probabilistic algorithm which can effectively discover correlated bursty patterns and their bursty periods across text streams even if the streams have completely different vocabularies (e.g., English vs Chinese). Evaluation of the proposed method on a news data set and a literature data set shows that it can effectively discover quite meaningful topic patterns from both data sets: the patterns discovered from the news data set accurately reveal the major common events covered in the two streams of news articles (in English and Chinese, respectively), while the patterns discovered from two database publication streams match well with the major research paradigm shifts in database research. Since the proposed method is general and does not require the streams to share vocabulary, it can be applied to any coordinated text streams to discover correlated topic patterns that burst in multiple streams in the same period.",
"title": ""
}
] |
[
{
"docid": "5d318e2df97f539e227f0aef60d0732b",
"text": "The concept of intuition has, until recently, received scant scholarly attention within and beyond the psychological sciences, despite its potential to unify a number of lines of inquiry. Presently, the literature on intuition is conceptually underdeveloped and dispersed across a range of domains of application, from education, to management, to health. In this article, we clarify and distinguish intuition from related constructs, such as insight, and review a number of theoretical models that attempt to unify cognition and affect. Intuition's place within a broader conceptual framework that distinguishes between two fundamental types of human information processing is explored. We examine recent evidence from the field of social cognitive neuroscience that identifies the potential neural correlates of these separate systems and conclude by identifying a number of theoretical and methodological challenges associated with the valid and reliable assessment of intuition as a basis for future research in this burgeoning field of inquiry.",
"title": ""
},
{
"docid": "942be0aa4dab5904139919351d6d63d4",
"text": "Since Hinton and Salakhutdinov published their landmark science paper in 2006 ending the previous neural-network winter, research in neural networks has increased dramatically. Researchers have applied neural networks seemingly successfully to various topics in the field of computer science. However, there is a risk that we overlook other methods. Therefore, we take a recent end-to-end neural-network-based work (Dhingra et al., 2018) as a starting point and contrast this work with more classical techniques. This prior work focuses on the LAMBADA word prediction task, where broad context is used to predict the last word of a sentence. It is often assumed that neural networks are good at such tasks where feature extraction is important. We show that with simpler syntactic and semantic features (e.g. Across Sentence Boundary (ASB) N-grams) a state-ofthe-art neural network can be outperformed. Our discriminative language-model-based approach improves the word prediction accuracy from 55.6% to 58.9% on the LAMBADA task. As a next step, we plan to extend this work to other language modeling tasks.",
"title": ""
},
{
"docid": "7d0ebf939deed43253d5360e325c3e8e",
"text": "Roughly speaking, clustering evolving networks aims at detecting structurally dense subgroups in networks that evolve over time. This implies that the subgroups we seek for also evolve, which results in many additional tasks compared to clustering static networks. We discuss these additional tasks and difficulties resulting thereof and present an overview on current approaches to solve these problems. We focus on clustering approaches in online scenarios, i.e., approaches that incrementally use structural information from previous time steps in order to incorporate temporal smoothness or to achieve low running time. Moreover, we describe a collection of real world networks and generators for synthetic data that are often used for evaluation.",
"title": ""
},
{
"docid": "78e3d9bbfc9fdd9c3454c34f09e5abd4",
"text": "This paper presents the first ever reported implementation of the Gapped Basic Local Alignment Search Tool (Gapped BLAST) for biological sequence alignment, with the Two-Hit method, on CUDA (compute unified device architecture)-compatible Graphic Processing Units (GPUs). The latter have recently emerged as relatively low cost and easy to program high performance platforms for general purpose computing. Our Gapped BLAST implementation on an NVIDIA Geforce 8800 GTX GPU is up to 2.7x quicker than the most optimized CPU-based implementation, namely NCBI BLAST, running on a Pentium4 3.4 GHz desktop computer with 2GB RAM.",
"title": ""
},
{
"docid": "846f8f33181c3143bb8f54ce8eb3e5cc",
"text": "Story Point is a relative measure heavily used for agile estimation of size. The team decides how big a point is, and based on that size, determines how many points each work item is. In many organizations, the use of story points for similar features can vary from team to another, and successfully, based on the teams' sizes, skill set and relative use of this tool. But in a CMMI organization, this technique demands a degree of consistency across teams for a more streamlined approach to solution delivery. This generates a challenge for CMMI organizations to adopt Agile in software estimation and planning. In this paper, a process and methodology that guarantees relativity in software sizing while using agile story points is introduced. The proposed process and methodology are applied in a CMMI company level three on different projects. By that, the story point is used on the level of the organization, not the project. Then, the performance of sizing process is measured to show a significant improvement in sizing accuracy after adopting the agile story point in CMMI organizations. To complete the estimation cycle, an improvement in effort estimation dependent on story point is also introduced, and its performance effect is measured.",
"title": ""
},
{
"docid": "44f831d346d42fd39bab3f577e6feec4",
"text": "We propose a training framework for sequence-to-sequence voice conversion (SVC). A well-known problem regarding a conventional VC framework is that acoustic-feature sequences generated from a converter tend to be over-smoothed, resulting in buzzy-sounding speech. This is because a particular form of similarity metric or distribution for parameter training of the acoustic model is assumed so that the generated feature sequence that averagely fits the training target example is considered optimal. This over-smoothing occurs as long as a manually constructed similarity metric is used. To overcome this limitation, our proposed SVC framework uses a similarity metric implicitly derived from a generative adversarial network, enabling the measurement of the distance in the high-level abstract space. This would enable the model to mitigate the oversmoothing problem caused in the low-level data space. Furthermore, we use convolutional neural networks to model the long-range context-dependencies. This also enables the similarity metric to have a shift-invariant property; thus, making the model robust against misalignment errors involved in the parallel data. We tested our framework on a non-native-to-native VC task. The experimental results revealed that the use of the proposed framework had a certain effect in improving naturalness, clarity, and speaker individuality.",
"title": ""
},
{
"docid": "3c0e132f0738105eb7fff7f73c520ef7",
"text": "Fan-out wafer-level-packaging (FO-WLP) technology gets more and more significant attention with its advantages of small form factor, higher I/O density, cost effective and high performance for wide range application. However, wafer warpage is still one critical issue which is needed to be addressed for successful subsequent processes for FO-WLP packaging. In this study, methodology to reduce wafer warpage of 12\" wafer at different processes was proposed in terms of geometry design, material selection, and process optimization through finite element analysis (FEA) and experiment. Wafer process dependent modeling results were validated by experimental measurement data. Solutions for reducing wafer warpage were recommended. Key parameters were identified based on FEA modeling results: thickness ratio of die to total mold thickness, molding compound and support wafer materials, dielectric material and RDL design.",
"title": ""
},
{
"docid": "7fc49f042770caf691e8bf074605a7ed",
"text": "Human prostate cancer is characterized by multiple gross chromosome alterations involving several chromosome regions. However, the specific genes involved in the development of prostate tumors are still largely unknown. Here we have studied the chromosome composition of the three established prostate cancer cell lines, LNCaP, PC-3, and DU145, by spectral karyotyping (SKY). SKY analysis showed complex karyotypes for all three cell lines, with 87, 58/113, and 62 chromosomes, respectively. All cell lines were shown to carry structural alterations of chromosomes 1, 2, 4, 6, 10, 15, and 16; however, no recurrent breakpoints were detected. Compared to previously published findings on these cell lines using comparative genomic hybridization, SKY revealed several balanced translocations and pinpointed rearrangement breakpoints. The SKY analysis was validated by fluorescence in situ hybridization using chromosome-specific, as well as locus-specific, probes. Identification of chromosome alterations in these cell lines by SKY may prove to be helpful in attempts to clone the genes involved in prostate cancer tumorigenesis.",
"title": ""
},
{
"docid": "1569bcea0c166d9bf2526789514609c5",
"text": "In this paper, we present the developmert and initial validation of a new self-report instrument, the Differentiation of Self Inventory (DSI). T. DSI represents the first attempt to create a multi-dimensional measure of differentiation based on Bowen Theory, focusing specifically on adults (ages 25 +), their current significant relationships, and their relations with families of origin. Principal components factor analysis on a sample of 313 normal adults (mean age = 36.8) suggested four dimensions: Emotional Reactivity, Reactive Distancing, Fusion with Parents, and \"I\" Position. Scales constructed from these factors were found to be moderately correlated in the expected direction, internally consistent, and significantly predictive of trait anxiety. The potential contribution of the DSI is discussed -for testing Bowen Theory, as a clinical assessment tool, and as an indicator of psychotherapeutic outcome.",
"title": ""
},
{
"docid": "92b2b7fb95624a187f5304c882d31dca",
"text": "Automatically predicting human eye fixations is a useful technique that can facilitate many multimedia applications, e.g., image retrieval, action recognition, and photo retargeting. Conventional approaches are frustrated by two drawbacks. First, psychophysical experiments show that an object-level interpretation of scenes influences eye movements significantly. Most of the existing saliency models rely on object detectors, and therefore, only a few prespecified categories can be discovered. Second, the relative displacement of objects influences their saliency remarkably, but current models cannot describe them explicitly. To solve these problems, this paper proposes weakly supervised fixations prediction, which leverages image labels to improve accuracy of human fixations prediction. The proposed model hierarchically discovers objects as well as their spatial configurations. Starting from the raw image pixels, we sample superpixels in an image, thereby seamless object descriptors termed object-level graphlets (oGLs) are generated by random walking on the superpixel mosaic. Then, a manifold embedding algorithm is proposed to encode image labels into oGLs, and the response map of each prespecified object is computed accordingly. On the basis of the object-level response map, we propose spatial-level graphlets (sGLs) to model the relative positions among objects. Afterward, eye tracking data is employed to integrate these sGLs for predicting human eye fixations. Thorough experiment results demonstrate the advantage of the proposed method over the state-of-the-art.",
"title": ""
},
{
"docid": "352c61af854ffc6dab438e7a1be56fcb",
"text": "Question-answering (QA) on video contents is a significant challenge for achieving human-level intelligence as it involves both vision and language in real-world settings. Here we demonstrate the possibility of an AI agent performing video story QA by learning from a large amount of cartoon videos. We develop a video-story learning model, i.e. Deep Embedded Memory Networks (DEMN), to reconstruct stories from a joint scene-dialogue video stream using a latent embedding space of observed data. The video stories are stored in a long-term memory component. For a given question, an LSTM-based attention model uses the long-term memory to recall the best question-story-answer triplet by focusing on specific words containing key information. We trained the DEMN on a novel QA dataset of children’s cartoon video series, Pororo. The dataset contains 16,066 scene-dialogue pairs of 20.5-hour videos, 27,328 fine-grained sentences for scene description, and 8,913 story-related QA pairs. Our experimental results show that the DEMN outperforms other QA models. This is mainly due to 1) the reconstruction of video stories in a scene-dialogue combined form that utilize the latent embedding and 2) attention. DEMN also achieved state-of-the-art results on the MovieQA benchmark.",
"title": ""
},
{
"docid": "63ed24b818f83ab04160b5c690075aac",
"text": "In this paper, we discuss the impact of digital control in high-frequency switched-mode power supplies (SMPS), including point-of-load and isolated DC-DC converters, microprocessor power supplies, power-factor-correction rectifiers, electronic ballasts, etc., where switching frequencies are typically in the hundreds of kHz to MHz range, and where high efficiency, static and dynamic regulation, low size and weight, as well as low controller complexity and cost are very important. To meet these application requirements, a digital SMPS controller may include fast, small analog-to-digital converters, hardware-accelerated programmable compensators, programmable digital modulators with very fine time resolution, and a standard microcontroller core to perform programming, monitoring and other system interface tasks. Based on recent advances in circuit and control techniques, together with rapid advances in digital VLSI technology, we conclude that high-performance digital controller solutions are both feasible and practical, leading to much enhanced system integration and performance gains. Examples of experimentally demonstrated results are presented, together with pointers to areas of current and future research and development.",
"title": ""
},
{
"docid": "84c37ea2545042a2654b162491846628",
"text": "Ever since the agile manifesto was created in 2001, the research community has devoted a great deal of attention to agile software development. This article examines publications and citations to illustrate how the research on agile has progressed in the 10 years following the articulation of the manifesto. nformation systems Xtreme programming, XP",
"title": ""
},
{
"docid": "1203f22bfdfc9ecd211dbd79a2043a6a",
"text": "After a short introduction to classic cryptography we explain thoroughly how quantum cryptography works. We present then an elegant experimental realization based on a self-balanced interferometer with Faraday mirrors. This phase-coding setup needs no alignment of the interferometer nor polarization control, and therefore considerably facilitates the experiment. Moreover it features excellent fringe visibility. Next, we estimate the practical limits of quantum cryptography. The importance of the detector noise is illustrated and means of reducing it are presented. With present-day technologies maximum distances of about 70 kmwith bit rates of 100 Hzare achievable. PACS: 03.67.Dd; 85.60; 42.25; 33.55.A Cryptography is the art of hiding information in a string of bits meaningless to any unauthorized party. To achieve this goal, one uses encryption: a message is combined according to an algorithm with some additional secret information – the key – to produce a cryptogram. In the traditional terminology, Alice is the party encrypting and transmitting the message, Bob the one receiving it, and Eve the malevolent eavesdropper. For a crypto-system to be considered secure, it should be impossible to unlock the cryptogram without Bob’s key. In practice, this demand is often softened, and one requires only that the system is sufficiently difficult to crack. The idea is that the message should remain protected as long as the information it contains is valuable. There are two main classes of crypto-systems, the publickey and the secret-key crypto-systems: Public key systems are based on so-called one-way functions: given a certainx, it is easy to computef(x), but difficult to do the inverse, i.e. compute x from f(x). “Difficult” means that the task shall take a time that grows exponentially with the number of bits of the input. The RSA (Rivest, Shamir, Adleman) crypto-system for example is based on the factorizing of large integers. Anyone can compute 137 ×53 in a few seconds, but it may take a while to find the prime factors of 28 907. To transmit a message Bob chooses a private key (based on two large prime numbers) and computes from it a public key (based on the product of these numbers) which he discloses publicly. Now Alice can encrypt her message using this public key and transmit it to Bob, who decrypts it with the private key. Public key systems are very convenient and became very popular over the last 20 years, however, they suffer from two potential major flaws. To date, nobody knows for sure whether or not factorizing is indeed difficult. For known algorithms, the time for calculation increases exponentially with the number of input bits, and one can easily improve the safety of RSA by choosing a longer key. However, a fast algorithm for factorization would immediately annihilate the security of the RSA system. Although it has not been published yet, there is no guarantee that such an algorithm does not exist. Second, problems that are difficult for a classical computer could become easy for a quantum computer. With the recent developments in the theory of quantum computation, there are reasons to fear that building these machines will eventually become possible. If one of these two possibilities came true, RSA would become obsolete. One would then have no choice, but to turn to secret-key cryptosystems. Very convenient and broadly used are crypto-systems based on a public algorithm and a relatively short secret key. 
The DES (Data Encryption Standard, 1977) for example uses a 56-bit key and the same algorithm for coding and decoding. The secrecy of the cryptogram, however, depends again on the calculating power and the time of the eavesdropper. The only crypto-system providing proven, perfect secrecy is the “one-time pad” proposed by Vernam in 1935. With this scheme, a message is encrypted using a random key of equal length, by simply “adding” each bit of the message to the corresponding bit of the key. The scrambled text can then be sent to Bob, who decrypts the message by “subtracting” the same key. The bits of the ciphertext are as random as those of the key and consequently do not contain any information. Although perfectly secure, the problem with this system is that it is essential for Alice and Bob to share a common secret key, at least as long as the message they want to exchange, and use it only for a single encryption. This key must be transmitted by some trusted means or personal meeting, which turns out to be complex and expensive.",
"title": ""
},
{
"docid": "3177e9dd683fdc66cbca3bd985f694b1",
"text": "Online communities allow millions of people who would never meet in person to interact. People join web-based discussion boards, email lists, and chat rooms for friendship, social support, entertainment, and information on technical, health, and leisure activities [24]. And they do so in droves. One of the earliest networks of online communities, Usenet, had over nine million unique contributors, 250 million messages, and approximately 200,000 active groups in 2003 [27], while the newer MySpace, founded in 2003, attracts a quarter million new members every day [27].",
"title": ""
},
{
"docid": "c450ac5c84d962bb7f2262cf48e1280a",
"text": "Animal-assisted therapies have become widespread with programs targeting a variety of pathologies and populations. Despite its popularity, it is unclear if this therapy is useful. The aim of this systematic review is to establish the efficacy of Animal assisted therapies in the management of dementia, depression and other conditions in adult population. A search was conducted in MEDLINE, EMBASE, CINAHL, LILACS, ScienceDirect, and Taylor and Francis, OpenGrey, GreyLiteratureReport, ProQuest, and DIALNET. No language or study type filters were applied. Conditions studied included depression, dementia, multiple sclerosis, PTSD, stroke, spinal cord injury, and schizophrenia. Only articles published after the year 2000 using therapies with significant animal involvement were included. 23 articles and dissertations met inclusion criteria. Overall quality was low. The degree of animal interaction significantly influenced outcomes. Results are generally favorable, but more thorough and standardized research should be done to strengthen the existing evidence.",
"title": ""
},
{
"docid": "6a2c7d43cde643f295ace71f5681285f",
"text": "Quantum mechanics and information theory are among the most important scientific discoveries of the last century. Although these two areas initially developed separately, it has emerged that they are in fact intimately related. In this review the author shows how quantum information theory extends traditional information theory by exploring the limits imposed by quantum, rather than classical, mechanics on information storage and transmission. The derivation of many key results differentiates this review from the usual presentation in that they are shown to follow logically from one crucial property of relative entropy. Within the review, optimal bounds on the enhanced speed that quantum computers can achieve over their classical counterparts are outlined using information-theoretic arguments. In addition, important implications of quantum information theory for thermodynamics and quantum measurement are intermittently discussed. A number of simple examples and derivations, including quantum superdense coding, quantum teleportation, and Deutsch’s and Grover’s algorithms, are also included.",
"title": ""
},
{
"docid": "95c4a2cfd063abdac35572927c4dcfc1",
"text": "Community detection is an important task in network analysis. A community (also referred to as a cluster) is a set of cohesive vertices that have more connections inside the set than outside. In many social and information networks, these communities naturally overlap. For instance, in a social network, each vertex in a graph corresponds to an individual who usually participates in multiple communities. In this paper, we propose an efficient overlapping community detection algorithm using a seed expansion approach. The key idea of our algorithm is to find good seeds, and then greedily expand these seeds based on a community metric. Within this seed expansion method, we investigate the problem of how to determine good seed nodes in a graph. In particular, we develop new seeding strategies for a personalized PageRank clustering scheme that optimizes the conductance community score. An important step in our method is the neighborhood inflation step where seeds are modified to represent their entire vertex neighborhood. Experimental results show that our seed expansion algorithm outperforms other state-of-the-art overlapping community detection methods in terms of producing cohesive clusters and identifying ground-truth communities. We also show that our new seeding strategies are better than existing strategies, and are thus effective in finding good overlapping communities in real-world networks.",
"title": ""
},
{
"docid": "646572f76cffd3ba225105d6647a588f",
"text": "Context: Cyber-physical systems (CPSs) have emerged to be the next generation of engineered systems driving the so-called fourth industrial revolution. CPSs are becoming more complex, open and more prone to security threats, which urges security to be engineered systematically into CPSs. Model-Based Security Engineering (MBSE) could be a key means to tackle this challenge via security by design, abstraction, and",
"title": ""
}
] |
scidocsrr
|
8d0f890590d41d3e24f7463ed329ccad
|
Blockchain-Based Database to Ensure Data Integrity in Cloud Computing Environments
|
[
{
"docid": "016a07d2ddb55149708409c4c62c67e3",
"text": "Cloud computing has emerged as a computational paradigm and an alternative to the conventional computing with the aim of providing reliable, resilient infrastructure, and with high quality of services for cloud users in both academic and business environments. However, the outsourced data in the cloud and the computation results are not always trustworthy because of the lack of physical possession and control over the data for data owners as a result of using to virtualization, replication and migration techniques. Since that the security protection the threats to outsourced data have become a very challenging and potentially formidable task in cloud computing, many researchers have focused on ameliorating this problem and enabling public auditability for cloud data storage security using remote data auditing (RDA) techniques. This paper presents a comprehensive survey on the remote data storage auditing in single cloud server domain and presents taxonomy of RDA approaches. The objective of this paper is to highlight issues and challenges to current RDA protocols in the cloud and the mobile cloud computing. We discuss the thematic taxonomy of RDA based on significant parameters such as security requirements, security metrics, security level, auditing mode, and update mode. The state-of-the-art RDA approaches that have not received much coverage in the literature are also critically analyzed and classified into three groups of provable data possession, proof of retrievability, and proof of ownership to present a taxonomy. It also investigates similarities and differences in such framework and discusses open research issues as the future directions in RDA research. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9f6e103a331ab52b303a12779d0d5ef6",
"text": "Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin’s blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem. In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network.",
"title": ""
},
{
"docid": "b76e466d4b446760bf3fd5d70e2edc1b",
"text": "Cloud computing has emerged as a long-dreamt vision of the utility computing paradigm that provides reliable and resilient infrastructure for users to remotely store data and use on-demand applications and services. Currently, many individuals and organizations mitigate the burden of local data storage and reduce the maintenance cost by outsourcing data to the cloud. However, the outsourced data is not always trustworthy due to the loss of physical control and possession over the data. As a result, many scholars have concentrated on relieving the security threats of the outsourced data by designing the Remote Data Auditing (RDA) technique as a new concept to enable public auditability for the stored data in the cloud. The RDA is a useful technique to check the reliability and integrity of data outsourced to a single or distributed servers. This is because all of the RDA techniques for single cloud servers are unable to support data recovery; such techniques are complemented with redundant storage mechanisms. The article also reviews techniques of remote data auditing more comprehensively in the domain of the distributed clouds in conjunction with the presentation of classifying ongoing developments within this specified area. The thematic taxonomy of the distributed storage auditing is presented based on significant parameters, such as scheme nature, security pattern, objective functions, auditing mode, update mode, cryptography model, and dynamic data structure. The more recent remote auditing approaches, which have not gained considerable attention in distributed cloud environments, are also critically analyzed and further categorized into three different classes, namely, replication based, erasure coding based, and network coding based, to present a taxonomy. This survey also aims to investigate similarities and differences of such a framework on the basis of the thematic taxonomy to diagnose significant and explore major outstanding issues.",
"title": ""
}
] |
[
{
"docid": "8fcb30825553e58ff66fd85ded10111e",
"text": "Most ecological processes now show responses to anthropogenic climate change. In terrestrial, freshwater, and marine ecosystems, species are changing genetically, physiologically, morphologically, and phenologically and are shifting their distributions, which affects food webs and results in new interactions. Disruptions scale from the gene to the ecosystem and have documented consequences for people, including unpredictable fisheries and crop yields, loss of genetic diversity in wild crop varieties, and increasing impacts of pests and diseases. In addition to the more easily observed changes, such as shifts in flowering phenology, we argue that many hidden dynamics, such as genetic changes, are also taking place. Understanding shifts in ecological processes can guide human adaptation strategies. In addition to reducing greenhouse gases, climate action and policy must therefore focus equally on strategies that safeguard biodiversity and ecosystems.",
"title": ""
},
{
"docid": "6c04e51492224fa3dc2c5bbf6608266b",
"text": "In many applications, one can obtain descriptions about the same objects or events from a variety of sources. As a result, this will inevitably lead to data or information conflicts. One important problem is to identify the true information (i.e., the truths) among conflicting sources of data. It is intuitive to trust reliable sources more when deriving the truths, but it is usually unknown which one is more reliable a priori. Moreover, each source possesses a variety of properties with different data types. An accurate estimation of source reliability has to be made by modeling multiple properties in a unified model. Existing conflict resolution work either does not conduct source reliability estimation, or models multiple properties separately. In this paper, we propose to resolve conflicts among multiple sources of heterogeneous data types. We model the problem using an optimization framework where truths and source reliability are defined as two sets of unknown variables. The objective is to minimize the overall weighted deviation between the truths and the multi-source observations where each source is weighted by its reliability. Different loss functions can be incorporated into this framework to recognize the characteristics of various data types, and efficient computation approaches are developed. Experiments on real-world weather, stock and flight data as well as simulated multi-source data demonstrate the necessity of jointly modeling different data types in the proposed framework.",
"title": ""
},
{
"docid": "644de61e0da130aafcd65691a8e1f47a",
"text": "We report on the first implementation of a single photon avalanche diode (SPAD) in 130 nm complementary metal-oxide-semiconductor (CMOS) technology. The SPAD is fabricated as p+/n-well junction with octagonal shape. A guard ring of p-well around the p+ anode is used to prevent premature discharge. To investigate the dynamics of the new device, both active and passive quenching methods have been used. Single photon detection is achieved by sensing the avalanche using a fast comparator. The SPAD exhibits a maximum photon detection probability of 41% and a typical dark count rate of 100 kHz at room temperature. Thanks to its timing resolution of 144 ps full-width at half-maximum (FWHM), the SPAD has several uses in disparate disciplines, including medical imaging, 3D vision, biophotonics, low-light illumination imaging, etc.",
"title": ""
},
{
"docid": "8f70026ff59ed1ae54ab5b6dadd2a3da",
"text": "Exoskeleton suit is a kind of human-machine robot, which combines the humans intelligence with the powerful energy of mechanism. It can help people to carry heavy load, walking on kinds of terrains and have a broadly apply area. Though many exoskeleton suits has been developed, there need many complex sensors between the pilot and the exoskeleton system, which decrease the comfort of the pilot. Sensitivity amplification control (SAC) is a method applied in exoskeleton system without any sensors between the pilot and the exoskeleton. In this paper simulation research was made to verify the feasibility of SAC include a simple 1-dof model and a swing phase model of 3-dof. A PID controller was taken to describe the human-machine interface model. Simulation results show the human only need to exert a scale-down version torque compared with the actuator and decrease the power consumes of the pilot.",
"title": ""
},
{
"docid": "5407b8e976d7e6e1d7aa1e00c278a400",
"text": "In his paper a 7T SRAM cell operating well in low voltages is presented. Suitable read operation structure is provided by controlling the drain induced barrier lowering (DIBL) effect and body-source voltage in the hold `1' state. The read-operation structure of the proposed cell utilizes the single transistor which leads to a larger write margin. The simulation results at 90nm TSMC CMOS demonstrate the outperforms of the proposed SRAM cell in terms of power dissipation, write margin, sensitivity to process variations as compared with the other most efficient low-voltage SRAM cells.",
"title": ""
},
{
"docid": "73e27f751c8027bac694f2e876d4d910",
"text": "The numerous and diverse applications of the Internet of Things (IoT) have the potential to change all areas of daily life of individuals, businesses, and society as a whole. The vision of a pervasive IoT spans a wide range of application domains and addresses the enabling technologies needed to meet the performance requirements of various IoT applications. In order to accomplish this vision, this paper aims to provide an analysis of literature in order to propose a new classification of IoT applications, specify and prioritize performance requirements of such IoT application classes, and give an insight into state-of-the-art technologies used to meet these requirements, all from telco’s perspective. A deep and comprehensive understanding of the scope and classification of IoT applications is an essential precondition for determining their performance requirements with the overall goal of defining the enabling technologies towards fifth generation (5G) networks, while avoiding over-specification and high costs. Given the fact that this paper presents an overview of current research for the given topic, it also targets the research community and other stakeholders interested in this contemporary and attractive field for the purpose of recognizing research gaps and recommending new research directions.",
"title": ""
},
{
"docid": "ca4100a8c305c064ea8716702859f11b",
"text": "It is widely believed, in the areas of optics, image analysis, and visual perception, that the Hilbert transform does not extend naturally and isotropically beyond one dimension. In some areas of image analysis, this belief has restricted the application of the analytic signal concept to multiple dimensions. We show that, contrary to this view, there is a natural, isotropic, and elegant extension. We develop a novel two-dimensional transform in terms of two multiplicative operators: a spiral phase spectral (Fourier) operator and an orientational phase spatial operator. Combining the two operators results in a meaningful two-dimensional quadrature (or Hilbert) transform. The new transform is applied to the problem of closed fringe pattern demodulation in two dimensions, resulting in a direct solution. The new transform has connections with the Riesz transform of classical harmonic analysis. We consider these connections, as well as others such as the propagation of optical phase singularities and the reconstruction of geomagnetic fields.",
"title": ""
},
{
"docid": "f250e8879618f73d5e23676a96f02e81",
"text": "Brain oscillatory activity is associated with different cognitive processes and plays a critical role in meditation. In this study, we investigated the temporal dynamics of oscillatory changes during Sahaj Samadhi meditation (a concentrative form of meditation that is part of Sudarshan Kriya yoga). EEG was recorded during Sudarshan Kriya yoga meditation for meditators and relaxation for controls. Spectral and coherence analysis was performed for the whole duration as well as specific blocks extracted from the initial, middle, and end portions of Sahaj Samadhi meditation or relaxation. The generation of distinct meditative states of consciousness was marked by distinct changes in spectral powers especially enhanced theta band activity during deep meditation in the frontal areas. Meditators also exhibited increased theta coherence compared to controls. The emergence of the slow frequency waves in the attention-related frontal regions provides strong support to the existing claims of frontal theta in producing meditative states along with trait effects in attentional processing. Interestingly, increased frontal theta activity was accompanied reduced activity (deactivation) in parietal–occipital areas signifying reduction in processing associated with self, space and, time.",
"title": ""
},
{
"docid": "a036dd162a23c5d24125d3270e22aaf7",
"text": "1 Problem Description This work is focused on the relationship between the news articles (breaking news) and stock prices. The student will design and develop methods to analyze how and when the news articles influence the stock market. News articles about Norwegian oil related companies and stock prices from \" BW Offshore Limited \" (BWO), \" DNO International \" (DNO), \" Frontline \" (FRO), \" Petroleum Geo-Services \" (PGS), \" Seadrill \" (SDRL), \" Sevan Marine \" (SEVAN), \" Siem Offshore \" (SIOFF), \" Statoil \" (STL) and \" TGS-NOPEC Geophysical Company \" (TGS) will be crawled, preprocessed and the important features in the text will be extracted to effectively represent the news in a form that allows the application of computational techniques. This data will then be used to train text sense classifiers. A prototype system that employs such classifiers will be developed to support the trader in taking sell/buy decisions. Methods will be developed for automaticall sense-labeling of news that are informed by the correlation between the changes in the stock prices and the breaking news. Performance of the prototype decision support system will be compared with a chosen baseline method for trade-related decision making. Abstract This thesis investigates the prediction of possible stock price changes immediately after news article publications. This is done by automatic analysis of these news articles. Some background information about financial trading theory and text mining is given in addition to an overview of earlier related research in the field of automatic news article analyzes with the purpose of predicting future stock prices. In this thesis a system is designed and implemented to predict stock price trends for the time immediately after the publication of news articles. This system consists mainly of four components. The first component gathers news articles and stock prices automatically from internet. The second component prepares the news articles by sending them to some document preprocessing steps and finding relevant features before they are sent to a document representation process. The third component categorizes the news articles into predefined categories, and finally the fourth component applies appropriate trading strategies depending on the category of the news article. This system requires a labeled data set to train the categorization component. This data set is labeled automatically on the basis of the price trends directly after the news article publication. An additional label refining step using clustering is added in an …",
"title": ""
},
{
"docid": "db7bc8bbfd7dd778b2900973f2cfc18d",
"text": "In this paper, the self-calibration of micromechanical acceleration sensors is considered, specifically, based solely on user-generated movement data without the support of laboratory equipment or external sources. The autocalibration algorithm itself uses the fact that under static conditions, the squared norm of the measured sensor signal should match the magnitude of the gravity vector. The resulting nonlinear optimization problem is solved using robust statistical linearization instead of the common analytical linearization for computing bias and scale factors of the accelerometer. To control the forgetting rate of the calibration algorithm, artificial process noise models are developed and compared with conventional ones. The calibration methodology is tested using arbitrarily captured acceleration profiles of the human daily routine and shows that the developed algorithm can significantly reject any misconfiguration of the acceleration sensor.",
"title": ""
},
{
"docid": "f3a838d6298c8ae127e548ba62e872eb",
"text": "Plasmodium falciparum resistance to artemisinins, the most potent and fastest acting anti-malarials, threatens malaria elimination strategies. Artemisinin resistance is due to mutation of the PfK13 propeller domain and involves an unconventional mechanism based on a quiescence state leading to parasite recrudescence as soon as drug pressure is removed. The enhanced P. falciparum quiescence capacity of artemisinin-resistant parasites results from an increased ability to manage oxidative damage and an altered cell cycle gene regulation within a complex network involving the unfolded protein response, the PI3K/PI3P/AKT pathway, the PfPK4/eIF2α cascade and yet unidentified transcription factor(s), with minimal energetic requirements and fatty acid metabolism maintained in the mitochondrion and apicoplast. The detailed study of these mechanisms offers a way forward for identifying future intervention targets to fend off established artemisinin resistance.",
"title": ""
},
{
"docid": "20563a2f75e074fe2a62a5681167bc01",
"text": "The introduction of a new generation of attractive touch screen-based devices raises many basic usability questions whose answers may influence future design and market direction. With a set of current mobile devices, we conducted three experiments focusing on one of the most basic interaction actions on touch screens: the operation of soft buttons. Issues investigated in this set of experiments include: a comparison of soft button and hard button performance; the impact of audio and vibrato-tactile feedback; the impact of different types of touch sensors on use, behavior, and performance; a quantitative comparison of finger and stylus operation; and an assessment of the impact of soft button sizes below the traditional 22 mm recommendation as well as below finger width.",
"title": ""
},
{
"docid": "cbc2b592efc227a5c6308edfbca51bd6",
"text": "The rapidly growing presence of Internet of Things (IoT) devices is becoming a continuously alluring playground for malicious actors who try to harness their vast numbers and diverse locations. One of their primary goals is to assemble botnets that can serve their nefarious purposes, ranging from Denial of Service (DoS) to spam and advertisement fraud. The most recent example that highlights the severity of the problem is the Mirai family of malware, which is accountable for a plethora of massive DDoS attacks of unprecedented volume and diversity. The aim of this paper is to offer a comprehensive state-of-the-art review of the IoT botnet landscape and the underlying reasons of its success with a particular focus on Mirai and major similar worms. To this end, we provide extensive details on the internal workings of IoT malware, examine their interrelationships, and elaborate on the possible strategies for defending against them.",
"title": ""
},
{
"docid": "5d6bd34fb5fdb44950ec5d98e77219c3",
"text": "This paper describes an experimental setup and results of user tests focusing on the perception of temporal characteristics of vibration of a mobile device. The experiment consisted of six vibration stimuli of different length. We asked the subjects to score the subjective perception level in a five point Lickert scale. The results suggest that the optimal duration of the control signal should be between 50 and 200 ms in this specific case. Longer durations were perceived as being irritating.",
"title": ""
},
{
"docid": "8d350cc11997b6a0dc96c9fef2b1919f",
"text": "Task-parameterized models of movements aims at automatically adapting movements to new situations encountered by a robot. The task parameters can for example take the form of positions of objects in the environment, or landmark points that the robot should pass through. This tutorial aims at reviewing existing approaches for task-adaptive motion encoding. It then narrows down the scope to the special case of task parameters that take the form of frames of reference, coordinate systems, or basis functions, which are most commonly encountered in service robotics. Each section of the paper is accompanied with source codes designed as simple didactic examples implemented in Matlab with a full compatibility with GNU Octave, closely following the notation and equations of the article. It also presents ongoing work and further challenges that remain to be addressed, with examples provided in simulation and on a real robot (transfer of manipulation behaviors to the Baxter bimanual robot). The repository for the accompanying source codes is available at http://www.idiap.ch/software/pbdlib/.",
"title": ""
},
{
"docid": "6442c9e4eb9034abf90fcd697c32a343",
"text": "With the increasing popularity and demand for mobile applications, there has been a significant increase in the number of mobile application development projects. Highly volatile requirements of mobile applications require adaptive software development methods. The Agile approach is seen as a natural fit for mobile application and there is a need to explore various Agile methodologies for the development of mobile applications. This paper evaluates how adopting various Agile approaches improves the development of mobile applications and if they can be used in order to provide more tailor-made process improvements within an organization. A survey related to mobile application development process improvement was developed. The use of various Agile approaches for success in mobile application development were evaluated by determining the significance of the most used Agile engineering paradigms such as XP, Scrum, and Lean. The findings of the study show that these Agile methods have the potential to help deliver enhanced speed and quality for mobile application development.",
"title": ""
},
{
"docid": "c6f173f75917ee0632a934103ca7566c",
"text": "Mersenne Twister (MT) is a widely-used fast pseudorandom number generator (PRNG) with a long period of 2 − 1, designed 10 years ago based on 32-bit operations. In this decade, CPUs for personal computers have acquired new features, such as Single Instruction Multiple Data (SIMD) operations (i.e., 128bit operations) and multi-stage pipelines. Here we propose a 128-bit based PRNG, named SIMD-oriented Fast Mersenne Twister (SFMT), which is analogous to MT but making full use of these features. Its recursion fits pipeline processing better than MT, and it is roughly twice as fast as optimised MT using SIMD operations. Moreover, the dimension of equidistribution of SFMT is better than MT. We also introduce a block-generation function, which fills an array of 32-bit integers in one call. It speeds up the generation by a factor of two. A speed comparison with other modern generators, such as multiplicative recursive generators, shows an advantage of SFMT. The implemented C-codes are downloadable from http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html.",
"title": ""
},
{
"docid": "77b4cb00c3a72fdeefa99aa504f492d8",
"text": "This article considers a short survey of basic methods of social networks analysis, which are used for detecting cyber threats. The main types of social network threats are presented. Basic methods of graph theory and data mining, that deals with social networks analysis are described. Typical security tasks of social network analysis, such as community detection in network, detection of leaders in communities, detection experts in networks, clustering text information and others are considered.",
"title": ""
},
{
"docid": "3669d58dc1bed1d83e5d0d6747771f0e",
"text": "To cite: He A, Kwatra SG, Kim N, et al. BMJ Case Rep Published online: [please include Day Month Year] doi:10.1136/bcr-2016214761 DESCRIPTION A 26-year-old woman with a reported history of tinea versicolour presented for persistent hypopigmentation on her bilateral forearms. Detailed examination revealed multiple small (5–10 mm), irregularly shaped white macules on the extensor surfaces of the bilateral forearms overlying slightly erythaematous skin. The surrounding erythaematous skin blanched with pressure and with elevation of the upper extremities the white macules were no longer visible (figures 1 and 2). A clinical diagnosis of Bier spots was made based on the patient’s characteristic clinical features. Bier spots are completely asymptomatic and are often found on the extensor surfaces of the upper and lower extremities, although they are sometimes generalised. They are a benign physiological vascular anomaly, arising either from cutaneous vessels responding to venous hypertension or from small vessel vasoconstriction leading to tissue hypoxia. 3 Our patient had neither personal nor family history of vascular disease. Bier spots are easily diagnosed by a classic sign on physical examination: the pale macules disappear with pressure applied on the surrounding skin or by elevating the affected limbs (figure 2). However, Bier spots can be easily confused with a variety of other disorders associated with hypopigmented macules. The differential diagnosis includes vitiligo, postinflammatory hypopigmentation and tinea versicolour, which was a prior diagnosis in this case. Bier spots are often idiopathic and regress spontaneously, although there are reports of Bier spots heralding systemic diseases, such as scleroderma renal crisis, mixed cryoglobulinaemia or lymphoma. Since most Bier spots are idiopathic and transient, no treatment is required.",
"title": ""
},
{
"docid": "f331337a19cff2cf29e89a87d7ab234f",
"text": "This paper presents an investigation of lexical chaining (Morris and Hirst, 1991) for measuring discourse coherence quality in test-taker essays. We hypothesize that attributes of lexical chains, as well as interactions between lexical chains and explicit discourse elements, can be harnessed for representing coherence. Our experiments reveal that performance achieved by our new lexical chain features is better than that of previous discourse features used for this task, and that the best system performance is achieved when combining lexical chaining features with complementary discourse features, such as those provided by a discourse parser based on rhetorical structure theory, and features that reflect errors in grammar, word usage, and mechanics.",
"title": ""
}
] |
scidocsrr
|
936f998587b76ff8b57021398cccb750
|
How Software Project Risk Affects Project Performance: An Investigation of the Dimensions of Risk and an Exploratory Model
|
[
{
"docid": "4506bc1be6e7b42abc34d79dc426688a",
"text": "The growing interest in Structured Equation Modeling (SEM) techniques and recognition of their importance in IS research suggests the need to compare and contrast different types of SEM techniques so that research designs can be selected appropriately. After assessing the extent to which these techniques are currently being used in IS research, the article presents a running example which analyzes the same dataset via three very different statistical techniques. It then compares two classes of SEM: covariance-based SEM and partial-least-squaresbased SEM. Finally, the article discusses linear regression models and offers guidelines as to when SEM techniques and when regression techniques should be used. The article concludes with heuristics and rule of thumb thresholds to guide practice, and a discussion of the extent to which practice is in accord with these guidelines.",
"title": ""
},
{
"docid": "02b6bcef39a21b14ce327f3dc9671fef",
"text": "We've all heard tales of multimillion dollar mistakes that somehow ran off course. Are software projects that risky or do managers need to take a fresh approach when preparing for such critical expeditions? Software projects are notoriously difficult to manage and too many of them end in failure. In 1995, annual U.S. spending on software projects reached approximately $250 billion and encompassed an estimated 175,000 projects [6]. Despite the costs involved, press reports suggest that project failures are occurring with alarming frequency. In 1995, U.S companies alone spent an estimated $59 billion in cost overruns on IS projects and another $81 billion on canceled software projects [6]. One explanation for the high failure rate is that managers are not taking prudent measures to assess and manage the risks involved in these projects. is Advocates of software project risk management claim that by countering these threats to success, the incidence of failure can be reduced [4, 5]. Before we can develop meaningful risk management strategies, however, we must identify these risks. Furthermore, the relative importance of these risks needs to be established, along with some understanding as to why certain risks are perceived to be more important than others. This is necessary so that managerial attention can be focused on the areas that constitute the greatest threats. Finally, identified risks must be classified in a way that suggests meaningful risk mitigation strategies. Here, we report the results of a Delphi study in which experienced software project managers identified and ranked the most important risks. The study led not only to the identification of risk factors and their relative importance, but also to novel insights into why project managers might view certain risks as being more important than others. Based on these insights, we introduce a framework for classifying software project risks and discuss appropriate strategies for managing each type of risk. Since the 1970s, both academics and practitioners have written about risks associated with managing software projects [1, 2, 4, 5, 7, 8]. Unfortunately , much of what has been written on risk is based either on anecdotal evidence or on studies limited to a narrow portion of the development process. Moreover, no systematic attempts have been made to identify software project risks by tapping the opinions of those who actually have experience in managing such projects. With a few exceptions [3, 8], there has been little attempt to understand the …",
"title": ""
}
] |
[
{
"docid": "24167db00908c65558e8034d94dfb8da",
"text": "Due to the wide variety of devices used in computer network systems, cybersecurity plays a major role in securing and improving the performance of the network or system. Although cybersecurity has received a large amount of global interest in recent years, it remains an open research space. Current security solutions in network-based cyberspace provide an open door to attackers by communicating first before authentication, thereby leaving a black hole for an attacker to enter the system before authentication. This article provides an overview of cyberthreats, traditional security solutions, and the advanced security model to overcome current security drawbacks.",
"title": ""
},
{
"docid": "fedbeb9d39ce91c96d93e05b5856f09e",
"text": "Devices for continuous glucose monitoring (CGM) are currently a major focus of research in the area of diabetes management. It is envisioned that such devices will have the ability to alert a diabetes patient (or the parent or medical care giver of a diabetes patient) of impending hypoglycemic/hyperglycemic events and thereby enable the patient to avoid extreme hypoglycemic/hyperglycemic excursions as well as minimize deviations outside the normal glucose range, thus preventing both life-threatening events and the debilitating complications associated with diabetes. It is anticipated that CGM devices will utilize constant feedback of analytical information from a glucose sensor to activate an insulin delivery pump, thereby ultimately realizing the concept of an artificial pancreas. Depending on whether the CGM device penetrates/breaks the skin and/or the sample is measured extracorporeally, these devices can be categorized as totally invasive, minimally invasive, and noninvasive. In addition, CGM devices are further classified according to the transduction mechanisms used for glucose sensing (i.e., electrochemical, optical, and piezoelectric). However, at present, most of these technologies are plagued by a variety of issues that affect their accuracy and long-term performance. This article presents a critical comparison of existing CGM technologies, highlighting critical issues of device accuracy, foreign body response, calibration, and miniaturization. An outlook on future developments with an emphasis on long-term reliability and performance is also presented.",
"title": ""
},
{
"docid": "0d43f72f92a73b648edd2dc3d1f0d141",
"text": "While egocentric video is becoming increasingly popular, browsing it is very difficult. In this paper we present a compact 3D Convolutional Neural Network (CNN) architecture for long-term activity recognition in egocentric videos. Recognizing long-term activities enables us to temporally segment (index) long and unstructured egocentric videos. Existing methods for this task are based on hand tuned features derived from visible objects, location of hands, as well as optical flow. Given a sparse optical flow volume as input, our CNN classifies the camera wearer's activity. We obtain classification accuracy of 89%, which outperforms the current state-of-the-art by 19%. Additional evaluation is performed on an extended egocentric video dataset, classifying twice the amount of categories than current state-of-the-art. Furthermore, our CNN is able to recognize whether a video is egocentric or not with 99.2% accuracy, up by 24% from current state-of-the-art. To better understand what the network actually learns, we propose a novel visualization of CNN kernels as flow fields.",
"title": ""
},
{
"docid": "d40a1b72029bdc8e00737ef84fdf5681",
"text": "— Ability of deep networks to extract high level features and of recurrent networks to perform time-series inference have been studied. In view of universality of one hidden layer network at approximating functions under weak constraints, the benefit of multiple layers is to enlarge the space of dynamical systems approximated or, given the space, reduce the number of units required for a certain error. Traditionally shallow networks with manually engineered features are used, back-propagation extent is limited to one and attempt to choose a large number of hidden units to satisfy the Markov condition is made. In case of Markov models, it has been shown that many systems need to be modeled as higher order. In the present work, we present deep recurrent networks with longer back-propagation through time extent as a solution to modeling systems that are high order and to predicting ahead. We study epileptic seizure suppression electro-stimulator. Extraction of manually engineered complex features and prediction employing them has not allowed small low-power implementations as, to avoid possibility of surgery, extraction of any features that may be required has to be included. In this solution, a recurrent neural network performs both feature extraction and prediction. We prove analytically that adding hidden layers or increasing backpropagation extent increases the rate of decrease of approximation error. A Dynamic Programming (DP) training procedure employing matrix operations is derived. DP and use of matrix operations makes the procedure efficient particularly when using data-parallel computing. The simulation studies show the geometry of the parameter space, that the network learns the temporal structure, that parameters converge while model output displays same dynamic behavior as the system and greater than .99 Average Detection Rate on all real seizure data tried.",
"title": ""
},
{
"docid": "196ddcefb2c3fcb6edd5e8d108f7e219",
"text": "This paper may be considered as a practical reference for those who wish to add (now sufficiently matured) Agent Based modeling to their analysis toolkit and may or may not have some System Dynamics or Discrete Event modeling background. We focus on systems that contain large numbers of active objects (people, business units, animals, vehicles, or even things like projects, stocks, products, etc. that have timing, event ordering or other kind of individual behavior associated with them). We compare the three major paradigms in simulation modeling: System Dynamics, Discrete Event and Agent Based Modeling with respect to how they approach such systems. We show in detail how an Agent Based model can be built from an existing System Dynamics or a Discrete Event model and then show how easily it can be further enhanced to capture much more complicated behavior, dependencies and interactions thus providing for deeper insight in the system being modeled. Commonly understood examples are used throughout the paper; all models are specified in the visual language supported by AnyLogic tool. We view and present Agent Based modeling not as a substitution to older modeling paradigms but as a useful add-on that can be efficiently combined with System Dynamics and Discrete Event modeling. Several multi-paradigm model architectures are suggested.",
"title": ""
},
{
"docid": "b0cc7d5313acaa47eb9cba9e830fa9af",
"text": "Data-driven intelligent transportation systems utilize data resources generated within intelligent systems to improve the performance of transportation systems and provide convenient and reliable services. Traffic data refer to datasets generated and collected on moving vehicles and objects. Data visualization is an efficient means to represent distributions and structures of datasets and reveal hidden patterns in the data. This paper introduces the basic concept and pipeline of traffic data visualization, provides an overview of related data processing techniques, and summarizes existing methods for depicting the temporal, spatial, numerical, and categorical properties of traffic data.",
"title": ""
},
{
"docid": "f4b270b09649ba05dd22d681a2e3e3b7",
"text": "Advanced analytical techniques are gaining popularity in addressing complex classification type decision problems in many fields including healthcare and medicine. In this exemplary study, using digitized signal data, we developed predictive models employing three machine learning methods to diagnose an asthma patient based solely on the sounds acquired from the chest of the patient in a clinical laboratory. Although, the performances varied slightly, ensemble models (i.e., Random Forest and AdaBoost combined with Random Forest) achieved about 90% accuracy on predicting asthma patients, compared to artificial neural networks models that achieved about 80% predictive accuracy. Our results show that noninvasive, computerized lung sound analysis that rely on low-cost microphones and an embedded real-time microprocessor system would help physicians to make faster and better diagnostic decisions, especially in situations where x-ray and CT-scans are not reachable or not available. This study is a testament to the improving capabilities of analytic techniques in support of better decision making, especially in situations constraint by limited resources.",
"title": ""
},
{
"docid": "b75f793f4feac0b658437026d98a1e8b",
"text": "From a certain (admittedly narrow) perspective, one of the annoying features of natural language is the ubiquitous syntactic ambiguity. For a computational model intended to assign syntactic descriptions to natural language text, this seem like a design defect. In general, when context and lexical content are taken into account, such syntactic ambiguity can be resolved: sentences used in context show, for the most part, little ambiguity. But the grammar provides many alternative analyses, and gives little guidance about resolving the ambiguity. Prepositional phrase attachment is the canonical case of structural ambiguity, as in the time worn example,",
"title": ""
},
{
"docid": "d63543712b2bebfbd0ded148225bb289",
"text": "This paper surveys recent literature in the area of Neural Network, Data Mining, Hidden Markov Model and Neuro-Fuzzy system used to predict the stock market fluctuation. Neural Networks and Neuro-Fuzzy systems are identified to be the leading machine learning techniques in stock market index prediction area. The Traditional techniques are not cover all the possible relation of the stock price fluctuations. There are new approaches to known in-depth of an analysis of stock price variations. NN and Markov Model can be used exclusively in the finance markets and forecasting of stock price. In this paper, we propose a forecasting method to provide better an accuracy rather traditional method. Forecasting stock return is an important financial subject that has attracted researchers’ attention for many years. It involves an assumption that fundamental information publicly available in the past has some predictive relationships to the future stock returns.",
"title": ""
},
{
"docid": "41261cf72d8ee3bca4b05978b07c1c4f",
"text": "The association of Sturge-Weber syndrome with naevus of Ota is an infrequently reported phenomenon and there are only four previously described cases in the literature. In this paper we briefly review the literature regarding the coexistence of vascular and pigmentary naevi and present an additional patient with the association of the Sturge-Weber syndrome and naevus of Ota.",
"title": ""
},
{
"docid": "f741eb8ca9fb9798fb89674a0e045de9",
"text": "We investigate the issue of model uncertainty in cross-country growth regressions using Bayesian Model Averaging (BMA). We find that the posterior probability is very spread among many models suggesting the superiority of BMA over choosing any single model. Out-of-sample predictive results support this claim. In contrast with Levine and Renelt (1992), our results broadly support the more “optimistic” conclusion of Sala-i-Martin (1997b), namely that some variables are important regressors for explaining cross-country growth patterns. However, care should be taken in the methodology employed. The approach proposed here is firmly grounded in statistical theory and immediately leads to posterior and predictive inference.",
"title": ""
},
{
"docid": "516f4b7bea87fad16b774a7f037efaec",
"text": "BACKGROUND\nOperating rooms (ORs) are resource-intense and costly hospital units. Maximizing OR efficiency is essential to maintaining an economically viable institution. OR efficiency projects often focus on a limited number of ORs or cases. Efforts across an entire OR suite have not been reported. Lean and Six Sigma methodologies were developed in the manufacturing industry to increase efficiency by eliminating non-value-added steps. We applied Lean and Six Sigma methodologies across an entire surgical suite to improve efficiency.\n\n\nSTUDY DESIGN\nA multidisciplinary surgical process improvement team constructed a value stream map of the entire surgical process from the decision for surgery to discharge. Each process step was analyzed in 3 domains, ie, personnel, information processed, and time. Multidisciplinary teams addressed 5 work streams to increase value at each step: minimizing volume variation; streamlining the preoperative process; reducing nonoperative time; eliminating redundant information; and promoting employee engagement. Process improvements were implemented sequentially in surgical specialties. Key performance metrics were collected before and after implementation.\n\n\nRESULTS\nAcross 3 surgical specialties, process redesign resulted in substantial improvements in on-time starts and reduction in number of cases past 5 pm. Substantial gains were achieved in nonoperative time, staff overtime, and ORs saved. These changes resulted in substantial increases in margin/OR/day.\n\n\nCONCLUSIONS\nUse of Lean and Six Sigma methodologies increased OR efficiency and financial performance across an entire operating suite. Process mapping, leadership support, staff engagement, and sharing performance metrics are keys to enhancing OR efficiency. The performance gains were substantial, sustainable, positive financially, and transferrable to other specialties.",
"title": ""
},
{
"docid": "9098d40a9e16a1bd1ed0a9edd96f3258",
"text": "The filter bank multicarrier with offset quadrature amplitude modulation (FBMC/OQAM) is being studied by many researchers as a key enabler for the fifth-generation air interface. In this paper, a hybrid peak-to-average power ratio (PAPR) reduction scheme is proposed for FBMC/OQAM signals by utilizing multi data block partial transmit sequence (PTS) and tone reservation (TR). In the hybrid PTS-TR scheme, the data blocks signal is divided into several segments, and the number of data blocks in each segment is determined by the overlapping factor. In each segment, we select the optimal data block to transmit and jointly consider the adjacent overlapped data block to achieve minimum signal power. Then, the peak reduction tones are utilized to cancel the peaks of the segment FBMC/OQAM signals. Simulation results and analysis show that the proposed hybrid PTS-TR scheme could provide better PAPR reduction than conventional PTS and TR schemes in FBMC/OQAM systems. Furthermore, we propose another multi data block hybrid PTS-TR scheme by exploiting the adjacent multi overlapped data blocks, called as the multi hybrid (M-hybrid) scheme. Simulation results show that the M-hybrid scheme can achieve about 0.2-dB PAPR performance better than the hybrid PTS-TR scheme.",
"title": ""
},
{
"docid": "be3e02812e35000b39e4608afc61f229",
"text": "The growing use of control access systems based on face recognition shed light over the need for even more accurate systems to detect face spoofing attacks. In this paper, an extensive analysis on face spoofing detection works published in the last decade is presented. The analyzed works are categorized by their fundamental parts, i.e., descriptors and classifiers. This structured survey also brings a comparative performance analysis of the works considering the most important public data sets in the field. The methodology followed in this work is particularly relevant to observe temporal evolution of the field, trends in the existing approaches, Corresponding author: Luciano Oliveira, tel. +55 71 3283-9472 Email addresses: [email protected] (Luiz Souza), [email protected] (Luciano Oliveira), [email protected] (Mauricio Pamplona), [email protected] (Joao Papa) to discuss still opened issues, and to propose new perspectives for the future of face spoofing detection.",
"title": ""
},
{
"docid": "91c937ddfcf7aa0957e1c9a997149f87",
"text": "Generative adversarial training can be generally understood as minimizing certain moment matching loss defined by a set of discriminator functions, typically neural networks. The discriminator set should be large enough to be able to uniquely identify the true distribution (discriminative), and also be small enough to go beyond memorizing samples (generalizable). In this paper, we show that a discriminator set is guaranteed to be discriminative whenever its linear span is dense in the set of bounded continuous functions. This is a very mild condition satisfied even by neural networks with a single neuron. Further, we develop generalization bounds between the learned distribution and true distribution under different evaluation metrics. When evaluated with neural distance, our bounds show that generalization is guaranteed as long as the discriminator set is small enough, regardless of the size of the generator or hypothesis set. When evaluated with KL divergence, our bound provides an explanation on the counter-intuitive behaviors of testing likelihood in GAN training. Our analysis sheds lights on understanding the practical performance of GANs.",
"title": ""
},
{
"docid": "976064ba00f4eb2020199f264d29dae2",
"text": "Social network analysis is a large and growing body of research on the measurement and analysis of relational structure. Here, we review the fundamental concepts of network analysis, as well as a range of methods currently used in the field. Issues pertaining to data collection, analysis of single networks, network comparison, and analysis of individual-level covariates are discussed, and a number of suggestions are made for avoiding common pitfalls in the application of network methods to substantive questions.",
"title": ""
},
{
"docid": "6b5bde39af1260effa0587d8c6afa418",
"text": "This survey highlights the major issues concerning privacy and security in online social networks. Firstly, we discuss research that aims to protect user data from the various attack vantage points including other users, advertisers, third party application developers, and the online social network provider itself. Next we cover social network inference of user attributes, locating hubs, and link prediction. Because online social networks are so saturated with sensitive information, network inference plays a major privacy role. As a response to the issues brought forth by client-server architectures, distributed social networks are discussed. We then cover the challenges that providers face in maintaining the proper operation of an online social network including minimizing spam messages, and reducing the number of sybil accounts. Finally, we present research in anonymizing social network data. This area is of particular interest in order to continue research in this field both in academia and in industry.",
"title": ""
},
{
"docid": "01b147cb417ceedf40dadcb3ee31a1b2",
"text": "BACKGROUND\nPurposeful and timely rounding is a best practice intervention to routinely meet patient care needs, ensure patient safety, decrease the occurrence of patient preventable events, and proactively address problems before they occur. The Institute for Healthcare Improvement (IHI) endorsed hourly rounding as the best way to reduce call lights and fall injuries, and increase both quality of care and patient satisfaction. Nurse knowledge regarding purposeful rounding and infrastructure supporting timeliness are essential components for consistency with this patient centred practice.\n\n\nOBJECTIVES\nThe project aimed to improve patient satisfaction and safety through implementation of purposeful and timely nursing rounds. Goals for patient satisfaction scores and fall volume were set. Specific objectives were to determine current compliance with evidence-based criteria related to rounding times and protocols, improve best practice knowledge among staff nurses, and increase compliance with these criteria.\n\n\nMETHODS\nFor the objectives of this project the Joanna Briggs Institute's Practical Application of Clinical Evidence System and Getting Research into Practice audit tool were used. Direct observation of staff nurses on a medical surgical unit in the United States was employed to assess timeliness and utilization of a protocol when rounding. Interventions were developed in response to baseline audit results. A follow-up audit was conducted to determine compliance with the same criteria. For the project aims, pre- and post-intervention unit-level data related to nursing-sensitive elements of patient satisfaction and safety were compared.\n\n\nRESULTS\nRounding frequency at specified intervals during awake and sleeping hours nearly doubled. Use of a rounding protocol increased substantially to 64% compliance from zero. Three elements of patient satisfaction had substantive rate increases but the hospital's goals were not reached. Nurse communication and pain management scores increased modestly (5% and 11%, respectively). Responsiveness of hospital staff increased moderately (15%) with a significant sub-element increase in toileting (41%). Patient falls decreased by 50%.\n\n\nCONCLUSIONS\nNurses have the ability to improve patient satisfaction and patient safety outcomes by utilizing nursing round interventions which serve to improve patient communication and staff responsiveness. Having a supportive infrastructure and an organized approach, encompassing all levels of staff, to meet patient needs during their hospital stay was a key factor for success. Hard-wiring of new practices related to workflow takes time as staff embrace change and understand how best practice interventions significantly improve patient outcomes.",
"title": ""
},
{
"docid": "ec681bc427c66adfad79008840ea9b60",
"text": "With the rapid development of the Computer Science and Technology, It has become a major problem for the users that how to quickly find useful or needed information. Text categorization can help people to solve this question. The feature selection method has become one of the most critical techniques in the field of the text automatic categorization. A new method of the text feature selection based on Information Gain and Genetic Algorithm is proposed in this paper. This method chooses the feature based on information gain with the frequency of items. Meanwhile, for the information filtering systems, this method has been improved fitness function to fully consider the characteristics of weight, text and vector similarity dimension, etc. The experiment has proved that the method can reduce the dimension of text vector and improve the precision of text classification.",
"title": ""
},
{
"docid": "1733a6f167e7e13bc816b7fc546e19e3",
"text": "As many other machine learning driven medical image analysis tasks, skin image analysis suffers from a chronic lack of labeled data and skewed class distributions, which poses problems for the training of robust and well-generalizing models. The ability to synthesize realistic looking images of skin lesions could act as a reliever for the aforementioned problems. Generative Adversarial Networks (GANs) have been successfully used to synthesize realistically looking medical images, however limited to low resolution, whereas machine learning models for challenging tasks such as skin lesion segmentation or classification benefit from much higher resolution data. In this work, we successfully synthesize realistically looking images of skin lesions with GANs at such high resolution. Therefore, we utilize the concept of progressive growing, which we both quantitatively and qualitatively compare to other GAN architectures such as the DCGAN and the LAPGAN. Our results show that with the help of progressive growing, we can synthesize highly realistic dermoscopic images of skin lesions that even expert dermatologists find hard to distinguish from real ones.",
"title": ""
}
] |
scidocsrr
|
f71fbccca7f7cca0a0e87fce5e1e9f92
|
Generative Adversarial Privacy
|
[
{
"docid": "e083b5fdf76bab5cdc8fcafc77db23f7",
"text": "Working under a model of privacy in which data remains private even from the statistician, we study the tradeoff between privacy guarantees and the risk of the resulting statistical estimators. We develop private versions of classical information-theoretic bounds, in particular those due to Le Cam, Fano, and Assouad. These inequalities allow for a precise characterization of statistical rates under local privacy constraints and the development of provably (minimax) optimal estimation procedures. We provide a treatment of several canonical families of problems: mean estimation and median estimation, multinomial probability estimation, and nonparametric density estimation. For all of these families, we provide lower and upper bounds that match up to constant factors, and exhibit new (optimal) privacy-preserving mechanisms and computationally efficient estimators that achieve the bounds. Additionally, we present a variety of experimental results for estimation problems involving sensitive data, including salaries, censored blog posts and articles, and drug abuse; these experiments demonstrate the importance of deriving optimal procedures.",
"title": ""
},
{
"docid": "5c716fbdc209d5d9f703af1e88f0d088",
"text": "Protecting visual secrets is an important problem due to the prevalence of cameras that continuously monitor our surroundings. Any viable solution to this problem should also minimize the impact on the utility of applications that use images. In this work, we build on the existing work of adversarial learning to design a perturbation mechanism that jointly optimizes privacy and utility objectives. We provide a feasibility study of the proposed mechanism and present ideas on developing a privacy framework based on the adversarial perturbation mechanism.",
"title": ""
}
] |
[
{
"docid": "6b8281957b0fd7e9ff88f64b8b6462aa",
"text": "As Critical National Infrastructures are becoming more vulnerable to cyber attacks, their protection becomes a significant issue for any organization as well as a nation. Moreover, the ability to attribute is a vital element of avoiding impunity in cyberspace. In this article, we present main threats to critical infrastructures along with protective measures that one nation can take, and which are classified according to legal, technical, organizational, capacity building, and cooperation aspects. Finally we provide an overview of current methods and practices regarding cyber attribution and cyber peace keeping.",
"title": ""
},
{
"docid": "791f440add573b1c35daca1d6eb7bcf4",
"text": "PURPOSE\nNivolumab, a programmed death-1 (PD-1) immune checkpoint inhibitor antibody, has demonstrated improved survival over docetaxel in previously treated advanced non-small-cell lung cancer (NSCLC). First-line monotherapy with nivolumab for advanced NSCLC was evaluated in the phase I, multicohort, Checkmate 012 trial.\n\n\nMETHODS\nFifty-two patients received nivolumab 3 mg/kg intravenously every 2 weeks until progression or unacceptable toxicity; postprogression treatment was permitted per protocol. The primary objective was to assess safety; secondary objectives included objective response rate (ORR) and 24-week progression-free survival (PFS) rate; overall survival (OS) was an exploratory end point.\n\n\nRESULTS\nAny-grade treatment-related adverse events (AEs) occurred in 71% of patients, most commonly: fatigue (29%), rash (19%), nausea (14%), diarrhea (12%), pruritus (12%), and arthralgia (10%). Ten patients (19%) reported grade 3 to 4 treatment-related AEs; grade 3 rash was the only grade 3 to 4 event occurring in more than one patient (n = 2; 4%). Six patients (12%) discontinued because of a treatment-related AE. The confirmed ORR was 23% (12 of 52), including four ongoing complete responses. Nine of 12 responses (75%) occurred by first tumor assessment (week 11); eight (67%) were ongoing (range, 5.3+ to 25.8+ months) at the time of data lock. ORR was 28% (nine of 32) in patients with any degree of tumor PD-ligand 1 expression and 14% (two of 14) in patients with no PD-ligand 1 expression. Median PFS was 3.6 months, and the 24-week PFS rate was 41% (95% CI, 27 to 54). Median OS was 19.4 months, and the 1-year and 18-month OS rates were 73% (95% CI, 59 to 83) and 57% (95% CI, 42 to 70), respectively.\n\n\nCONCLUSION\nFirst-line nivolumab monotherapy demonstrated a tolerable safety profile and durable responses in first-line advanced NSCLC.",
"title": ""
},
{
"docid": "0ae0e78ac068d8bc27d575d90293c27b",
"text": "Deep web refers to the hidden part of the Web that remains unavailable for standard Web crawlers. To obtain content of Deep Web is challenging and has been acknowledged as a significant gap in the coverage of search engines. To this end, the paper proposes a novel deep web crawling framework based on reinforcement learning, in which the crawler is regarded as an agent and deep web database as the environment. The agent perceives its current state and selects an action (query) to submit to the environment according to Q-value. The framework not only enables crawlers to learn a promising crawling strategy from its own experience, but also allows for utilizing diverse features of query keywords. Experimental results show that the method outperforms the state of art methods in terms of crawling capability and breaks through the assumption of full-text search implied by existing methods.",
"title": ""
},
{
"docid": "d8acda345bbcb1ef25e3ee9934dd12a2",
"text": "This chapter looks into the key infrastructure factors affecting the success of small companies in developing economies that are establishing B2B ecommerce ventures by aggregating critical success factors from general ecommerce studies and studies from e-commerce in developing countries. The factors were identified through a literature review and case studies of two organizations. The results of the pilot study and literature review reveal five groups of success factors that contribute to the success of B2B e-commerce. These factors were later assessed for importance using a survey. The outcome of our analysis reveals a reduced list of key critical success factors that SMEs should emphasize as well as a couple of key policy implications for governments in developing countries. This chapter appears in the book, e-Business, e-Government & Small and Medium-Sized Enterprises: Opportunities and Challenges, edited by Brian J. Corbitt and Nabeel Al-Qirim. Copyright © 2004, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited. 701 E. Chocolate Avenue, Suite 200, Hershey PA 17033-1240, USA Tel: 717/533-8845; Fax 717/533-8661; URL-http://www.idea-group.com IDEA GROUP PUBLISHING 186 Jennex, Amoroso and Adelakun Copyright © 2004, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited. INTRODUCTION Information and Communication Technology (ICT) can provide a small enterprise an opportunity to conduct business anywhere. Use of the Internet allows small businesses to project virtual storefronts to the world as well as conduct business with other organizations. Heeks and Duncombe (2001) discuss how IT can be used in developing countries to build businesses. Domaracki (2001) discusses how the technology gap between small and large businesses is closing and evening the playing field, making B2B and B2C e-commerce available to any business with access to computers, web browsers, and telecommunication links. This chapter discusses how small start-up companies can use ICT to establish e-commerce applications within developing economies where the infrastructure is not classified as “high-technology”. E-commerce is the process of buying, selling, or exchanging products, services, and information using computer networks including the Internet (Turban et al., 2002). Kalakota and Whinston (1997) define e-commerce using the perspectives of network communications, automated business processes, automated services, and online buying and selling. Turban et al. (2002) add perspectives on collaboration and community. Deise et al. (2000) describe the E-selling process as enabling customers through E-Browsing (catalogues, what we have), E-Buying (ordering, processing, invoicing, cost determination, etc.), and E-Customer Service (contact, etc.). Partial e-commerce occurs when the process is not totally using networks. B2C e-commerce is the electronic sale of goods, services, and content to individuals, Noyce (2002), Turban et al. (2002). B2B e-commerce is a transaction conducted electronically between businesses over the Internet, extranets, intranets, or private networks. Such transactions may be conducted between a business and its supply chain members, as well as between a business and any other business. A business refers to any organization, public or private, for profit or nonprofit (Turban et al., 2002, p. 217; Noyce, 2002; Palvia and Vemuri, 2002). 
Initially, B2B was used almost exclusively by large organizations to buy and sell industrial outputs and/or inputs. More recently B2B has expanded to small and medium sized enterprises, SMEs, who can buy and/or sell products/services directly, Mayer-Guell (2001). B2B transactions tend to be larger in value, more complex, and longer term when compared to B2C transactions with the average B2B transaction being worth $75,000.00 while the average B2C transaction is worth $75.00 (Freeman, 2001). Typical B2B transactions involve order management, credit management and the establishment of trade terms, product delivery and billing, invoice approval, payment, and the management of information for the entire process, Domaracki (2001). Noyce (2002) discusses collaboration as the underlying principle for B2B. The companies chosen as mini-cases for this study meet the basic definition of B2B with their e-commerce ventures as both are selling services over the Internet to other business organizations. Additionally, both provide quotes and the ability to 19 more pages are available in the full version of this document, which may be purchased using the \"Add to Cart\" button on the product's webpage: www.igi-global.com/chapter/b2b-commerce-infrastructuresuccess-factors/8749?camid=4v1 This title is available in InfoSci-Books, Business-TechnologySolution, InfoSci-Business Technologies, Business, Administration, and Management, InfoSci-Select, InfoSciBusiness and Management, InfoSci-Government and Law, InfoSci-Select, InfoSci-Select. Recommend this product to your librarian: www.igi-global.com/e-resources/libraryrecommendation/?id=1",
"title": ""
},
{
"docid": "5896289f0a9b788ef722756953a580ce",
"text": "Biodiesel, defined as the mono-alkyl esters of vegetable oils or animal fats, is an balternativeQ diesel fuel that is becoming accepted in a steadily growing number of countries around the world. Since the source of biodiesel varies with the location and other sources such as recycled oils are continuously gaining interest, it is important to possess data on how the various fatty acid profiles of the different sources can influence biodiesel fuel properties. The properties of the various individual fatty esters that comprise biodiesel determine the overall fuel properties of the biodiesel fuel. In turn, the properties of the various fatty esters are determined by the structural features of the fatty acid and the alcohol moieties that comprise a fatty ester. Structural features that influence the physical and fuel properties of a fatty ester molecule are chain length, degree of unsaturation, and branching of the chain. Important fuel properties of biodiesel that are influenced by the fatty acid profile and, in turn, by the structural features of the various fatty esters are cetane number and ultimately exhaust emissions, heat of combustion, cold flow, oxidative stability, viscosity, and lubricity. Published by Elsevier B.V.",
"title": ""
},
{
"docid": "5179662c841302180848dc566a114f10",
"text": "Hyperspectral image (HSI) unmixing has attracted increasing research interests in recent decades. The major difficulty of it lies in that the endmembers and the associated abundances need to be separated from highly mixed observation data with few a priori information. Recently, sparsity-constrained nonnegative matrix factorization (NMF) algorithms have been proved effective for hyperspectral unmixing (HU) since they can sufficiently utilize the sparsity property of HSIs. In order to improve the performance of NMF-based unmixing approaches, spectral and spatial constrains have been added into the unmixing model, but spectral-spatial joint structure is required to be more accurately estimated. To exploit the property that similar pixels within a small spatial neighborhood have higher possibility to share similar abundances, hypergraph structure is employed to capture the similarity relationship among the spatial nearby pixels. In the construction of a hypergraph, each pixel is taken as a vertex of the hypergraph, and each vertex with its k nearest spatial neighboring pixels form a hyperedge. Using the hypergraph, the pixels with similar abundances can be accurately found, which enables the unmixing algorithm to obtain promising results. Experiments on synthetic data and real HSIs are conducted to investigate the performance of the proposed algorithm. The superiority of the proposed algorithm is demonstrated by comparing it with some state-of-the-art methods.",
"title": ""
},
{
"docid": "587eea887a3fcb6561833c250ae9c6e3",
"text": "We present a new interactive and online approach to 3D scene understanding. Our system, SemanticPaint, allows users to simultaneously scan their environment whilst interactively segmenting the scene simply by reaching out and touching any desired object or surface. Our system continuously learns from these segmentations, and labels new unseen parts of the environment. Unlike offline systems where capture, labeling, and batch learning often take hours or even days to perform, our approach is fully online. This provides users with continuous live feedback of the recognition during capture, allowing to immediately correct errors in the segmentation and/or learning—a feature that has so far been unavailable to batch and offline methods. This leads to models that are tailored or personalized specifically to the user's environments and object classes of interest, opening up the potential for new applications in augmented reality, interior design, and human/robot navigation. It also provides the ability to capture substantial labeled 3D datasets for training large-scale visual recognition systems.",
"title": ""
},
{
"docid": "d509601659e2192fb4ea8f112c9d75fe",
"text": "Computer vision has advanced significantly that many discriminative approaches such as object recognition are now widely used in real applications. We present another exciting development that utilizes generative models for the mass customization of medical products such as dental crowns. In the dental industry, it takes a technician years of training to design synthetic crowns that restore the function and integrity of missing teeth. Each crown must be customized to individual patients, and it requires human expertise in a time-consuming and laborintensive process, even with computer assisted design software. We develop a fully automatic approach that learns not only from human designs of dental crowns, but also from natural spatial profiles between opposing teeth. The latter is hard to account for by technicians but important for proper biting and chewing functions. Built upon a Generative Adversarial Network architecture (GAN), our deep learning model predicts the customized crown-filled depth scan from the crown-missing depth scan and opposing depth scan. We propose to incorporate additional space constraints and statistical compatibility into learning. Our automatic designs exceed human technicians’ standards for good morphology and functionality, and our algorithm is being tested for production use.",
"title": ""
},
{
"docid": "ee79f55fe096b195984ecdc1fc570179",
"text": "In bibliographies like DBLP and Citeseer, there are three kinds of entity-name problems that need to be solved. First, multiple entities share one name, which is called the name sharing problem. Second, one entity has different names, which is called the name variant problem. Third, multiple entities share multiple names, which is called the name mixing problem. We aim to solve these problems based on one model in this paper. We call this task complete entity resolution. Different from previous work, our work use global information based on data with two types of information, words and author names. We propose a generative latent topic model that involves both author names and words — the LDA-dual model, by extending the LDA (Latent Dirichlet Allocation) model. We also propose a method to obtain model parameters that is global information. Based on obtained model parameters, we propose two algorithms to solve the three problems mentioned above. Experimental results demonstrate the effectiveness and great potential of the proposed model and algorithms.",
"title": ""
},
{
"docid": "a1292045684debec0e6e56f7f5e85fad",
"text": "BACKGROUND\nLncRNA and microRNA play an important role in the development of human cancers; they can act as a tumor suppressor gene or an oncogene. LncRNA GAS5, originating from the separation from tumor suppressor gene cDNA subtractive library, is considered as an oncogene in several kinds of cancers. The expression of miR-221 affects tumorigenesis, invasion and metastasis in multiple types of human cancers. However, there's very little information on the role LncRNA GAS5 and miR-221 play in CRC. Therefore, we conducted this study in order to analyze the association of GAS5 and miR-221 with the prognosis of CRC and preliminary study was done on proliferation, metastasis and invasion of CRC cells. In the present study, we demonstrate the predictive value of long non-coding RNA GAS5 (lncRNA GAS5) and mircoRNA-221 (miR-221) in the prognosis of colorectal cancer (CRC) and their effects on CRC cell proliferation, migration and invasion.\n\n\nMETHODS\nOne hundred and fifty-eight cases with CRC patients and 173 cases of healthy subjects that with no abnormalities, who've been diagnosed through colonoscopy between January 2012 and January 2014 were selected for the study. After the clinicopathological data of the subjects, tissue, plasma and exosomes were collected, lncRNA GAS5 and miR-221 expressions in tissues, plasma and exosomes were measured by reverse transcription quantitative polymerase chain reaction (RT-qPCR). The diagnostic values of lncRNA GAS5 and miR-221 expression in tissues, plasma and exosomes in patients with CRC were analyzed using receiver operating characteristic curve (ROC). Lentiviral vector was constructed for the overexpression of lncRNA GAS5, and SW480 cell line was used for the transfection of the experiment and assigned into an empty vector and GAS5 groups. The cell proliferation, migration and invasion were tested using a cell counting kit-8 assay and Transwell assay respectively.\n\n\nRESULTS\nThe results revealed that LncRNA GAS5 was upregulated while the miR-221 was downregulated in the tissues, plasma and exosomes of patients with CRC. The results of ROC showed that the expressions of both lncRNA GAS5 and miR-221 in the tissues, plasma and exosomes had diagnostic value in CRC. While the LncRNA GAS5 expression in tissues, plasma and exosomes were associated with the tumor node metastasis (TNM) stage, Dukes stage, lymph node metastasis (LNM), local recurrence rate and distant metastasis rate, the MiR-221 expression in tissues, plasma and exosomes were associated with tumor size, TNM stage, Dukes stage, LNM, local recurrence rate and distant metastasis rate. LncRNA GAS5 and miR-221 expression in tissues, plasma and exosomes were found to be independent prognostic factors for CRC. Following the overexpression of GAS5, the GAS5 expressions was up-regulated and miR-221 expression was down-regulated; the rate of cell proliferation, migration and invasion were decreased.",
"title": ""
},
{
"docid": "553a86035f5013595ef61c4c19997d7c",
"text": "This paper proposes a novel self-oscillating, boost-derived (SOBD) dc-dc converter with load regulation. This proposed topology utilizes saturable cores (SCs) to offer self-oscillating and output regulation capabilities. Conventionally, the self-oscillating dc transformer (SODT) type of scheme can be implemented in a very cost-effective manner. The ideal dc transformer provides both input and output currents as pure, ripple-free dc quantities. However, the structure of an SODT-type converter will not provide regulation, and its oscillating frequency will change in accordance with the load. The proposed converter with SCs will allow output-voltage regulation to be accomplished by varying only the control current between the transformers, as occurs in a pulse-width modulation (PWM) converter. A control network that combines PWM schemes with a regenerative function is used for this converter. The optimum duty cycle is implemented to achieve low levels of input- and output-current ripples, which are characteristic of an ideal dc transformer. The oscillating frequency will spontaneously be kept near-constant, regardless of the load, without adding any auxiliary or compensation circuits. The typical voltage waveforms of the transistors are found to be close to quasisquare. The switching surges are well suppressed, and the voltage stress of the component is well clamped. The turn-on/turn-off of the switch is zero-voltage switching (ZVS), and its resonant transition can occur over a wide range of load current levels. A prototype circuit of an SOBD converter shows 86% efficiency at 48-V input, with 12-V, 100-W output, and presents an operating frequency of 100 kHz.",
"title": ""
},
{
"docid": "85d4675562eb87550c3aebf0017e7243",
"text": "Online social media are complementing and in some cases replacing person-to-person social interaction and redefining the diffusion of information. In particular, microblogs have become crucial grounds on which public relations, marketing, and political battles are fought. We introduce an extensible framework that will enable the real-time analysis of meme diffusion in social media by mining, visualizing, mapping, classifying, and modeling massive streams of public microblogging events. We describe a Web service that leverages this framework to track political memes in Twitter and help detect astroturfing, smear campaigns, and other misinformation in the context of U.S. political elections. We present some cases of abusive behaviors uncovered by our service. Finally, we discuss promising preliminary results on the detection of suspicious memes via supervised learning based on features extracted from the topology of the diffusion networks, sentiment analysis, and crowdsourced annotations.",
"title": ""
},
{
"docid": "58042f8c83e5cc4aa41e136bb4e0dc1f",
"text": "In this paper, we propose wire-free integrated sensors that monitor pulse wave velocity (PWV) and respiration, both non-electrical vital signs, by using an all-electrical method. The key techniques that we employ to obtain all-electrical and wire-free measurement are bio-impedance (BI) and analog-modulated body-channel communication (BCC), respectively. For PWV, time difference between ECG signal from the heart and BI signal from the wrist is measured. To remove wires and avoid sampling rate mismatch between ECG and BI sensors, ECG signal is sent to the BI sensor via analog BCC without any sampling. For respiration measurement, BI sensor is located at the abdomen to detect volume change during inhalation and exhalation. A prototype chip fabricated in 0.11 μm CMOS process consists of ECG, BI sensor and BCC transceiver. Measurement results show that heart rate and PWV are both within their normal physiological range. The chip consumes 1.28 mW at 1.2 V supply while occupying 5 mm×2.5 mm of area.",
"title": ""
},
{
"docid": "268a9b3a1a567c25c5ba93708b0a167b",
"text": "Many types of relations in physical, biological, social and information systems can be modeled as homogeneous or heterogeneous concept graphs. Hence, learning from and with graph embeddings has drawn a great deal of research interest recently, but developing an embedding learning method that is flexible enough to accommodate variations in physical networks is still a challenging problem. In this paper, we conjecture that the one-shot supervised learning mechanism is a bottleneck in improving the performance of the graph embedding learning, and propose to extend this by introducing a multi-shot \"unsupervised\" learning framework where a 2-layer MLP network for every shot .The framework can be extended to accommodate a variety of homogeneous and heterogeneous networks. Empirical results on several real-world data set show that the proposed model consistently and significantly outperforms existing state-of-the-art approaches on knowledge base completion and graph based multi-label classification tasks.",
"title": ""
},
{
"docid": "98cd53e6bf758a382653cb7252169d22",
"text": "We introduce a novel malware detection algorithm based on the analysis of graphs constructed from dynamically collected instruction traces of the target executable. These graphs represent Markov chains, where the vertices are the instructions and the transition probabilities are estimated by the data contained in the trace. We use a combination of graph kernels to create a similarity matrix between the instruction trace graphs. The resulting graph kernel measures similarity between graphs on both local and global levels. Finally, the similarity matrix is sent to a support vector machine to perform classification. Our method is particularly appealing because we do not base our classifications on the raw n-gram data, but rather use our data representation to perform classification in graph space. We demonstrate the performance of our algorithm on two classification problems: benign software versus malware, and the Netbull virus with different packers versus other classes of viruses. Our results show a statistically significant improvement over signature-based and other machine learning-based detection methods.",
"title": ""
},
{
"docid": "b1d9e27972b2ea9af105bc6c026fddc9",
"text": "Data diversity is critical to success when training deep learning models. Medical imaging data sets are often imbalanced as pathologic findings are generally rare, which introduces significant challenges when training deep learning models. In this work, we propose a method to generate synthetic abnormal MRI images with brain tumors by training a generative adversarial network using two publicly available data sets of brain MRI. We demonstrate two unique benefits that the synthetic images provide. First, we illustrate improved performance on tumor segmentation by leveraging the synthetic images as a form of data augmentation. Second, we demonstrate the value of generative models as an anonymization tool, achieving comparable tumor segmentation results when trained on the synthetic data versus when trained on real subject data. Together, these results offer a potential solution to two of the largest challenges facing machine learning in medical imaging, namely the small incidence of pathological findings, and the restrictions around sharing of patient data.",
"title": ""
},
{
"docid": "9b656d1ae57b43bb2ccf2d971e46eae3",
"text": "On the one hand, enterprises manufacturing any kinds of goods require agile production technology to be able to fully accommodate their customers’ demand for flexibility. On the other hand, Smart Objects, such as networked intelligent machines or tagged raw materials, exhibit ever increasing capabilities, up to the point where they offer their smart behaviour as web services. The two trends towards higher flexibility and more capable objects will lead to a service-oriented infrastructure where complex processes will span over all types of systems — from the backend enterprise system down to the Smart Objects. To fully support this, we present SOCRADES, an integration architecture that can serve the requirements of future manufacturing. SOCRADES provides generic components upon which sophisticated production processes can be modelled. In this paper we in particular give a list of requirements, the design, and the reference implementation of that integration architecture.",
"title": ""
},
{
"docid": "7519e3a8326e2ef2ebd28c22e80c4e34",
"text": "This paper presents a synthetic framework identifying the central drivers of start-up commercialization strategy and the implications of these drivers for industrial dynamics. We link strategy to the commercialization environment – the microeconomic and strategic conditions facing a firm that is translating an \" idea \" into a value proposition for customers. The framework addresses why technology entrepreneurs in some environments undermine established firms, while others cooperate with incumbents and reinforce existing market power. Our analysis suggests that competitive interaction between start-up innovators and established firms depends on the presence or absence of a \" market for ideas. \" By focusing on the operating requirements, efficiency, and institutions associated with markets for ideas, this framework holds several implications for the management of high-technology entrepreneurial firms. (Stern). We would like to thank the firms who participate in the MIT Commercialization Strategies survey for their time and effort. The past two decades have witnessed a dramatic increase in investment in technology entrepreneurship – the founding of small, start-up firms developing inventions and technology with significant potential commercial application. Because of their youth and small size, start-up innovators usually have little experience in the markets for which their innovations are most appropriate, and they have at most two or three technologies at the stage of potential market introduction. For these firms, a key management challenge is how to translate promising",
"title": ""
},
{
"docid": "bd2fcdd0b7139bf719f1ec7ffb4fe5d5",
"text": "Much is known on how facial expressions of emotion are produced, including which individual muscles are most active in each expression. Yet, little is known on how this information is interpreted by the human visual system. This paper presents a systematic study of the image dimensionality of facial expressions of emotion. In particular, we investigate how recognition degrades when the resolution of the image (i.e., number of pixels when seen as a 5.3 by 8 degree stimulus) is reduced. We show that recognition is only impaired in practice when the image resolution goes below 20 × 30 pixels. A study of the confusion tables demonstrates that each expression of emotion is consistently confused by a small set of alternatives and that the confusion is not symmetric, i.e., misclassifying emotion a as b does not imply we will mistake b for a. This asymmetric pattern is consistent over the different image resolutions and cannot be explained by the similarity of muscle activation. Furthermore, although women are generally better at recognizing expressions of emotion at all resolutions, the asymmetry patterns are the same. We discuss the implications of these results for current models of face perception.",
"title": ""
},
{
"docid": "1d507afcd430b70944bd7f460ee90277",
"text": "Moringa oleifera, or the horseradish tree, is a pan-tropical species that is known by such regional names as benzolive, drumstick tree, kelor, marango, mlonge, mulangay, nébéday, saijhan, and sajna. Over the past two decades, many reports have appeared in mainstream scientific journals describing its nutritional and medicinal properties. Its utility as a non-food product has also been extensively described, but will not be discussed herein, (e.g. lumber, charcoal, fencing, water clarification, lubricating oil). As with many reports of the nutritional or medicinal value of a natural product, there are an alarming number of purveyors of “healthful” food who are now promoting M. oleifera as a panacea. While much of this recent enthusiasm indeed appears to be justified, it is critical to separate rigorous scientific evidence from anecdote. Those who charge a premium for products containing Moringa spp. must be held to a high standard. Those who promote the cultivation and use of Moringa spp. in regions where hope is in short supply must be provided with the best available evidence, so as not to raise false hopes and to encourage the most fruitful use of scarce research capital. It is the purpose of this series of brief reviews to: (a) critically evaluate the published scientific evidence on M. oleifera, (b) highlight claims from the traditional and tribal medicinal lore and from non-peer reviewed sources that would benefit from further, rigorous scientific evaluation, and (c) suggest directions for future clinical research that could be carried out by local investigators in developing regions. This is the first of four planned papers on the nutritional, therapeutic, and prophylactic properties of Moringa oleifera. In this introductory paper, the scientific evidence for health effects are summarized in tabular format, and the strength of evidence is discussed in very general terms. A second paper will address a select few uses of Moringa in greater detail than they can be dealt with in the context of this paper. A third paper will probe the phytochemical components of Moringa in more depth. A fourth paper will lay out a number of suggested research projects that can be initiated at a very small scale and with very limited resources, in geographic regions which are suitable for Moringa cultivation and utilization. In advance of this fourth paper in the series, the author solicits suggestions and will gladly acknowledge contributions that are incorporated into the final manuscript. It is the intent and hope of the journal’s editors that such a network of small-scale, locally executed investigations might be successfully woven into a greater fabric which will have enhanced scientific power over similar small studies conducted and reported in isolation. Such an approach will have the added benefit that statistically sound planning, peer review, and multi-center coordination brings to a scientific investigation. Copyright: ©2005 Jed W. Fahey This is an Open Access article distributed under the terms of the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Contact: Jed W. Fahey Email: [email protected] Received: September 15, 2005 Accepted: November 20, 2005 Published: December 1, 2005 The electronic version of this article is the complete one and can be found online at: http://www.TFLJournal.org/article.php/200512011",
"title": ""
}
] |
scidocsrr
|
3484f4181d878358a50d88cd8b4c00fb
|
Efficient and extensible security enforcement using dynamic data flow analysis
|
[
{
"docid": "1a0d0b0b38e6d6434448cee8959c58a8",
"text": "This paper reports the first results of an investigation into solutions to problems of security in computer systems; it establishes the basis for rigorous investigation by providing a general descriptive model of a computer system. Borrowing basic concepts and constructs from general systems theory, we present a basic result concerning security in computer systems, using precise notions of \"security\" and \"compromise\". We also demonstrate how a change in requirements can be reflected in the resulting mathematical model. A lengthy introductory section is included in order to bridge the gap between general systems theory and practical problem solving. ii PREFACE General systems theory is a relatively new and rapidly growing mathematical discipline which shows great promise for application in the computer sciences. The discipline includes both \"general systems-theory\" and \"general-systems-theory\": that is, one may properly read the phrase \"general systems theory\" in both ways. In this paper, we have borrowed from the works of general systems theorists, principally from the basic work of Mesarovic´, to formulate a mathematical framework within which to deal with the problems of secure computer systems. At the present time we feel that the mathematical representation developed herein is adequate to deal with most if not all of the security problems one may wish to pose. In Section III we have given a result which deals with the most trivial of the secure computer systems one might find viable in actual use. In the concluding section we review the application of our mathematical methodology and suggest major areas of concern in the design of a secure system. The results reported in this paper lay the groundwork for further, more specific investigation into secure computer systems. The investigation will proceed by specializing the elements of the model to represent particular aspects of system design and operation. Such an investigation will be reported in the second volume of this series where we assume a system with centralized access control. A preliminary investigation of distributed access is just beginning; the results of that investigation would be reported in a third volume of the series.",
"title": ""
}
] |
[
{
"docid": "26d235dbaa2bfd6bdf81cbd78610b68c",
"text": "In the information systems (IS) domain, technology adoption has been one of the most extensively researched areas. Although in the last decade various models had been introduced to address the acceptance or rejection of information systems, there is still a lack of existing studies regarding a comprehensive review and classification of researches in this area. The main objective of this study is steered toward gaining a comprehensive understanding of the progresses made in the domain of IT adoption research, by highlighting the achievements, setbacks, and prospects recorded in this field so as to be able to identify existing research gaps and prospective areas for future research. This paper aims at providing a comprehensive review on the current state of IT adoption research. A total of 330 articles published in IS ranked journals between the years 2006 and 2015 in the domain of IT adoption were reviewed. The research scope was narrowed to six perspectives, namely year of publication, theories underlining the technology adoption, level of research, dependent variables, context of the technology adoption, and independent variables. In this research, information on trends in IT adoption is provided by examining related research works to provide insights and future direction on technology adoption for practitioners and researchers. This paper highlights future research paths that can be taken by researchers who wish to endeavor in technology adoption research. It also summarizes the key findings of previous research works including statistical findings of factors that had been introduced in IT adoption studies.",
"title": ""
},
{
"docid": "0705cadb5baa97c4995c9b829389810c",
"text": "The production and culture of new species of mushrooms is increasing. The breeding of new strains has significantly improved, allowing the use of strains with high yield and resistance to diseases, increasing productivity and diminishing the use of chemicals for pest control. The improvement and development of modern technologies, such as computerized control, automated mushroom harvesting, preparation of compost, production of mushrooms in a non-composted substrate, and new methods of substrate sterilization and spawn preparation, will increase the productivity of mushroom culture. All these aspects are crucial for the production of mushrooms with better flavor, appearance, texture, nutritional qualities, and medicinal properties at low cost. Mushroom culture is a biotechnological process that recycles ligninocellulosic wastes, since mushrooms are food for human consumption and the spent substrate can be used in different ways.",
"title": ""
},
{
"docid": "69ad93c7b6224321d69456c23a4185ce",
"text": "Modeling fashion compatibility is challenging due to its complexity and subjectivity. Existing work focuses on predicting compatibility between product images (e.g. an image containing a t-shirt and an image containing a pair of jeans). However, these approaches ignore real-world ‘scene’ images (e.g. selfies); such images are hard to deal with due to their complexity, clutter, variations in lighting and pose (etc.) but on the other hand could potentially provide key context (e.g. the user’s body type, or the season) for making more accurate recommendations. In this work, we propose a new task called ‘Complete the Look’, which seeks to recommend visually compatible products based on scene images. We design an approach to extract training data for this task, and propose a novel way to learn the scene-product compatibility from fashion or interior design images. Our approach measures compatibility both globally and locally via CNNs and attention mechanisms. Extensive experiments show that our method achieves significant performance gains over alternative systems. Human evaluation and qualitative analysis are also conducted to further understand model behavior. We hope this work could lead to useful applications which link large corpora of real-world scenes with shoppable products.",
"title": ""
},
{
"docid": "a6834bf39e84e4aa9964a7b01e79095f",
"text": "As in many neural network architectures, the use of Batch Normalization (BN) has become a common practice for Generative Adversarial Networks (GAN). In this paper, we propose using Euclidean reconstruction error on a test set for evaluating the quality of GANs. Under this measure, together with a careful visual analysis of generated samples, we found that while being able to speed training during early stages, BN may have negative effects on the quality of the trained model and the stability of the training process. Furthermore, Weight Normalization, a more recently proposed technique, is found to improve the reconstruction, training speed and especially the stability of GANs, and thus should be used in place of BN in GAN training.",
"title": ""
},
{
"docid": "f6f6f322118f5240aec5315f183a76ab",
"text": "Learning from data sets that contain very few instances of the minority class usually produces biased classifiers that have a higher predictive accuracy over the majority class, but poorer predictive accuracy over the minority class. SMOTE (Synthetic Minority Over-sampling Technique) is specifically designed for learning from imbalanced data sets. This paper presents a modified approach (MSMOTE) for learning from imbalanced data sets, based on the SMOTE algorithm. MSMOTE not only considers the distribution of minority class samples, but also eliminates noise samples by adaptive mediation. The combination of MSMOTE and AdaBoost are applied to several highly and moderately imbalanced data sets. The experimental results show that the prediction performance of MSMOTE is better than SMOTEBoost in the minority class and F-values are also improved.",
"title": ""
},
{
"docid": "64c2b9f59a77f03e6633e5804356e9fc",
"text": "AbstructWe present a novel method, that we call EVENODD, for tolerating up to two disk failures in RAID architectures. EVENODD employs the addition of only two redundant disks and consists of simple exclusive-OR computations. This redundant storage is optimal, in the sense that two failed disks cannot be retrieved with less than two redundant disks. A major advantage of EVENODD is that it only requires parity hardware, which is typically present in standard RAID-5 controllers. Hence, EVENODD can be implemented on standard RAID-5 controllers without any hardware changes. The most commonly used scheme that employes optimal redundant storage (Le., two extra disks) is based on ReedSolomon (RS) error-correcting codes. This scheme requires computation over finite fields and results in a more complex implementation. For example, we show that the complexity of implementing EVENODD in a disk array with 15 disks is about 50% of the one required when using the RS scheme. The new scheme is not limited to RAID architectures: it can be used in any system requiring large symbols and relatively short codes, for instance, in multitrack magnetic recording. To this end, we also present a decoding algorithm for one column (track) in error.",
"title": ""
},
{
"docid": "f70c07e15c4070edf75e8846b4dff0b3",
"text": "Polyphenols, including flavonoids, phenolic acids, proanthocyanidins and resveratrol, are a large and heterogeneous group of phytochemicals in plant-based foods, such as tea, coffee, wine, cocoa, cereal grains, soy, fruits and berries. Growing evidence indicates that various dietary polyphenols may influence carbohydrate metabolism at many levels. In animal models and a limited number of human studies carried out so far, polyphenols and foods or beverages rich in polyphenols have attenuated postprandial glycemic responses and fasting hyperglycemia, and improved acute insulin secretion and insulin sensitivity. The possible mechanisms include inhibition of carbohydrate digestion and glucose absorption in the intestine, stimulation of insulin secretion from the pancreatic beta-cells, modulation of glucose release from the liver, activation of insulin receptors and glucose uptake in the insulin-sensitive tissues, and modulation of intracellular signalling pathways and gene expression. The positive effects of polyphenols on glucose homeostasis observed in a large number of in vitro and animal models are supported by epidemiological evidence on polyphenol-rich diets. To confirm the implications of polyphenol consumption for prevention of insulin resistance, metabolic syndrome and eventually type 2 diabetes, human trials with well-defined diets, controlled study designs and clinically relevant end-points together with holistic approaches e.g., systems biology profiling technologies are needed.",
"title": ""
},
{
"docid": "3cbd22082a7bf570520e8175dff30bf7",
"text": "Gender dysphoria is suggested to be a consequence of sex atypical cerebral differentiation. We tested this hypothesis in a magnetic resonance study of voxel-based morphometry and structural volumetry in 48 heterosexual men (HeM) and women (HeW) and 24 gynephillic male to female transsexuals (MtF-TR). Specific interest was paid to gray matter (GM) and white matter (WM) fraction, hemispheric asymmetry, and volumes of the hippocampus, thalamus, caudate, and putamen. Like HeM, MtF-TR displayed larger GM volumes than HeW in the cerebellum and lingual gyrus and smaller GM and WM volumes in the precentral gyrus. Both male groups had smaller hippocampal volumes than HeW. As in HeM, but not HeW, the right cerebral hemisphere and thalamus volume was in MtF-TR lager than the left. None of these measures differed between HeM and MtF-TR. MtF-TR displayed also singular features and differed from both control groups by having reduced thalamus and putamen volumes and elevated GM volumes in the right insular and inferior frontal cortex and an area covering the right angular gyrus.The present data do not support the notion that brains of MtF-TR are feminized. The observed changes in MtF-TR bring attention to the networks inferred in processing of body perception.",
"title": ""
},
{
"docid": "2509b427f650c7fc54cdb5c38cdb2bba",
"text": "Inbreeding depression on female fertility and calving ease in Spanish dairy cattle was studied by the traditional inbreeding coefficient (F) and an alternative measurement indicating the inbreeding rate (DeltaF) for each animal. Data included records from 49,497 and 62,134 cows for fertility and calving ease, respectively. Both inbreeding measurements were included separately in the routine genetic evaluation models for number of insemination to conception (sequential threshold animal model) and calving ease (sire-maternal grandsire threshold model). The F was included in the model as a categorical effect, whereas DeltaF was included as a linear covariate. Inbred cows showed impaired fertility and tended to have more difficult calvings than low or noninbred cows. Pregnancy rate decreased by 1.68% on average for cows with F from 6.25 to 12.5%. This amount of inbreeding, however, did not seem to increase dystocia incidence. Inbreeding depression was larger for F greater than 12.5%. Cows with F greater than 25% had lower pregnancy rate and higher dystocia rate (-6.37 and 1.67%, respectively) than low or noninbred cows. The DeltaF had a significant effect on female fertility. A DeltaF = 0.01, corresponding to an inbreeding coefficient of 5.62% for the average equivalent generations in the data used (5.68), lowered pregnancy rate by 1.5%. However, the posterior estimate for the effect of DeltaF on calving ease was not significantly different from zero. Although similar patterns were found with both F and DeltaF, the latter detected a lowered pregnancy rate at an equivalent F, probably because it may consider the known depth of the pedigree. The inbreeding rate might be an alternative choice to measure inbreeding depression.",
"title": ""
},
{
"docid": "6c3d5a7f92d68863ef484d5367267eaf",
"text": "This paper complements a series of works on implicative verbs such as manage to and fail to. It extends the description of simple implicative verbs to phrasal implicatives as take the time to and waste the chance to. It shows that the implicative signatures of over 300 verb-noun collocations depend both on the semantic type of the verb and the semantic type of the noun in a systematic way.",
"title": ""
},
{
"docid": "69c8584255b16e6bc05fdfc6510d0dc4",
"text": "OBJECTIVE\nThis study assesses the psychometric properties of Ward's seven-subtest short form (SF) for WAIS-IV in a sample of adults with schizophrenia (SZ) and schizoaffective disorder.\n\n\nMETHOD\nSeventy patients diagnosed with schizophrenia or schizoaffective disorder were administered the full version of the WAIS-IV. Four different versions of the Ward's SF were then calculated. The subtests used were: Similarities, Digit Span, Arithmetic, Information, Coding, Picture Completion, and Block Design (BD version) or Matrix Reasoning (MR version). Prorated and regression-based formulae were assessed for each version.\n\n\nRESULTS\nThe actual and estimated factorial indexes reflected the typical pattern observed in schizophrenia. The four SFs correlated significantly with their full-version counterparts, but the Perceptual Reasoning Index (PRI) correlated below the acceptance threshold for all four versions. The regression-derived estimates showed larger differences compared to the full form. The four forms revealed comparable but generally low clinical category agreement rates for factor indexes. All SFs showed an acceptable reliability, but they were not correlated with clinical outcomes.\n\n\nCONCLUSIONS\nThe WAIS-IV SF offers a good estimate of WAIS-IV intelligence quotient, which is consistent with previous results. Although the overall scores are comparable between the four versions, the prorated forms provided a better estimation of almost all indexes. MR can be used as an alternative for BD without substantially changing the psychometric properties of the SF. However, we recommend a cautious use of these abbreviated forms when it is necessary to estimate the factor index scores, especially PRI, and Processing Speed Index.",
"title": ""
},
{
"docid": "3848dd7667a25e8e7f69ecc318324224",
"text": "This paper describes the CloudProtect middleware that empowers users to encrypt sensitive data stored within various cloud applications. However, most web applications require data in plaintext for implementing the various functionalities and in general, do not support encrypted data management. Therefore, CloudProtect strives to carry out the data transformations (encryption/decryption) in a manner that is transparent to the application, i.e., preserves all functionalities of the application, including those that require data to be in plaintext. Additionally, CloudProtect allows users flexibility in trading off performance for security in order to let them optimally balance their privacy needs and usage-experience.",
"title": ""
},
{
"docid": "0f659ff5414e75aefe23bb85127d93dd",
"text": "Important information is captured in medical documents. To make use of this information and intepret the semantics, technologies are required for extracting, analysing and interpreting it. As a result, rich semantics including relations among events, subjectivity or polarity of events, become available. The First Workshop on Extraction and Processing of Rich Semantics from Medical Texts, is devoted to the technologies for dealing with clinical documents for medical information gathering and application in knowledge based systems. New approaches for identifying and analysing rich semantics are presented. In this paper, we introduce the topic and summarize the workshop contributions.",
"title": ""
},
{
"docid": "104d16c298c8790ca8da0df4d7e34a4b",
"text": "musical structure of a culture or genre” (Bharucha 1984, p. 421). So, unlike tonal hierarchies that refer to cognitive representations of the structure of music across different pieces of music in the style, event hierarchies refer to a particular piece of music and the place of each event in that piece. The two hierarchies occupy complementary roles. In listening to music or music-like experimental materials (melodies and harmonic progressions), the listener responds both to the structure provided by the tonal hierarchy and the structure provided by the event hierarchy. Musical activity involves dynamic patterns of stability and instability to which both the tonal and event hierarchies contribute. Understanding the relations between them and their interaction in processing musical structure is a central issue, not yet extensively studied empirically. 3.3 Empirical Research: The Basic Studies This section outlines the classic findings that illustrate tonal relationships and the methodologies used to establish these findings. 3.3.1 The Probe Tone Method Quantification is the first step in empirical studies because it makes possible the kinds of analytic techniques needed to understand complex human behaviors. An experimental method that has been used to quantify the tonal hierarchy is called the probe-tone method (Krumhansl and Shepard 1979). It was based on the observation that if you hear the incomplete ascending C major scale, C-D-E-F-G-A-B, you strongly expect that the next tone will be the high C. It is the next logical tone in the series, proximal to the last tone of the context, B, and it is the tonic of the key. When, in the experiment, incomplete ascending and descending scale contexts were followed by the tone C (the probe tone), listeners rated it highly as to how well it completed the scale (1 = very badly, 7 = very well). Other probe tones, however, also received fairly high ratings, and they were not necessarily those that are close in pitch to the last tone of the context. For example, the more musically trained listeners also gave high ratings to the dominant, G, and the mediant, E, which together with the C form the tonic triad. The tones of the scale received higher ratings than the nonscale tones, C# D# F# G# and A#. Less musically trained listeners were more influenced by how close the probe tone was to the tone sounded most recently at the end of the context, although their ratings also contained some of the tonal hierarchy pattern. A subsequent study used this method with a variety of contexts at the beginning of the trials (Krumhansl and Kessler 1982). Contexts were chosen because they are clear indicators of a key. They included the scale, the tonic triad chord, and chord 56 C.L. Krumhansl and L.L. Cuddy sequences strongly defining major and minor keys. These contexts were followed by all possible probe tones in the 12-tone chromatic scale, which musically trained listeners were instructed to judge in terms of how well they fit with the preceding context in a musical sense. The results for contexts of the same mode (major or minor) were similar when transposed to a common tonic. Also, the results were largely independent of which particular type of context was used (e.g., chord versus chord cadence). Consequently, the rating data were transposed to a common tonic and averaged over the context types. The resulting values are termed standardized key profiles. 
The values for the major key profile are 6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88, where the first number corresponds to the mean rating for the tonic of the key, the second to the next of the 12 tones in the chromatic scale, and so on. The values for the minor key context are 6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17. These are plotted in Fig. 3.1, in which C is assumed to be the tonic. Both major and minor contexts produce clear and musically interpretable hierarchies in the sense that tones are ordered or ranked according to music-theoretic descriptions. The results of these initial studies suggested that it is possible to obtain quantitative judgments of the degree to which different tones are perceived as stable reference tones in musical contexts. The task appeared to be accessible to listeners who differed considerably in their music training. This was important for further investigations of the responses of listeners without knowledge of specialized vocabularies for describing music, or who were unfamiliar with the musical style. Finally, the results in these and many subsequent studies were quite consistent over a variety of task instructions and musical contexts used to induce a sense of key. [Fig. 3.1: (a) Probe tone ratings for a C major context. (b) Probe tone ratings for a C minor context. Values from Krumhansl and Kessler (1982).] Quantification of the tonal hierarchies is an important first step in empirical research but, as seen later, a great deal of research has studied it from a variety of different perspectives. 3.3.2 Converging Evidence To substantiate any theoretical construct, such as the tonal hierarchy, it is important to have evidence from experiments using different methods. This strategy is known as “converging operations” (Garner et al. 1956). This section describes a number of other experimental measures that show influences of the tonal hierarchy. It has an effect on the degree to which tones are perceived as similar to one another (Krumhansl 1979), such that tones high in the hierarchy are perceived as relatively similar to one another. For example, in the key of C major, C and G are perceived as highly related, whereas C# and G# are perceived as distantly related, even though they are just as far apart objectively (in semitones). In addition, a pair of tones is heard as more related when the second is more stable in the tonal hierarchy than the first (compared to the reverse order). For example, the tones F#-G are perceived as more related to one another than are G-F# because G is higher in the tonal hierarchy than F#. Similar temporal-order asymmetries also appear in memory studies. For example, F# is more often confused with G than G is confused with F# (Krumhansl 1979). These data reflect the proposition that each tone is drawn toward, or expected to resolve to, a tone of greater stability in the tonal hierarchy. Janata and Reisberg (1988) showed that the tonal hierarchy also influenced reaction time measures in tasks requiring a categorical judgment about a tone’s key membership. For both scale and chord contexts, faster reaction times (in-key/out-of-key) were obtained for tones higher in the hierarchy. In addition, a recency effect was found for the scale context as for the nonmusicians in the original probe tone study (Krumhansl and Shepard 1979). 
Miyazaki (1989) found that listeners with absolute pitch named tones highest in tonal hierarchy of C major faster and more accurately than other tones. This is remarkable because it suggests that musical training has a very specific effect on the acquisition of absolute pitch. Most of the early piano repertoire is written in the key of C major and closely related keys. All of these listeners began piano lessons as young as 3–5 years of age, and were believed to have acquired absolute pitch through exposure to piano tones. The tonal hierarchy also appears in judgments of what tone constitutes a good phrase ending (Palmer and Krumhansl 1987a, b; Boltz 1989a, b). A number of studies show that the tonal hierarchy is one of the factors that influences expectations for melodic continuations (Schmuckler 1989; Krumhansl 1991, 1995b; Cuddy and Lunney 1995; Krumhansl et al. 1999, 2000). Other factors include pitch proximity, interval size, and melodic direction. The influence of the tonal hierarchy has also been demonstrated in a study of expressive piano performance (Thompson and Cuddy 1997). Expression refers to the changes in duration and dynamics (loudness) that performers add beyond the notated music. For the harmonized sequences used in their study, the performance was influenced by the tonal hierarchy. Tones that were tonally stable within a key (higher in the tonal hierarchy) tended to be played for longer duration in the melody than those less stable (lower in the tonal hierarchy). A method used more recently (Aarden 2003, described in Huron 2006) is a reaction-time task in which listeners had to judge whether unfamiliar melodies went up, down, or stayed the same (a tone was repeated). The underlying idea is that reaction times should be faster when the tone conforms to listeners’ expectations. His results confirmed this hypothesis, namely, that reaction times were faster for tones higher in the hierarchy. As described later, his data conformed to a very large statistical analysis he did of melodies in major and minor keys. Finally, tonal expectations result in event-related potentials (ERPs), changes in electrical potentials measured on the surface of the head (Besson and Faïta 1995; Besson et al. 1998). A larger P300 component, a positive change approximately 300 ms after the final tone, was found when a melody ended with a tone out of the scale of its key than a tone in the scale. This finding was especially true for musicians and familiar melodies, suggesting that learning plays some role in producing the effect; however, the effect was also present in nonmusicians, only to a lesser degree. This section has cited only a small proportion of the studies that have been conducted on tonal hierarchies. A closely related issue that has also been studied extensively is the existence of, and the effects of, a hierarchy of chords. The choice of the experiments reviewed here was to illustrate the variety of approaches that have been taken. Across the studies, consistent effects were found with many different kinds of experimental",
"title": ""
},
{
"docid": "5dbd994583805d41fb34837ca52fc712",
"text": "This editorial is part of a For-Discussion-Section of Methods of Information in Medicine about the paper \"Evidence-based Health informatics: How do we know what we know?\", written by Elske Ammenwerth [1]. Health informatics uses and applications have crept up on health systems over half a century, starting as simple automation of large-scale calculations, but now manifesting in many cases as rule- and algorithm-based creation of composite clinical analyses and 'black box' computation of clinical aspects, as well as enablement of increasingly complex care delivery modes and consumer health access. In this process health informatics has very largely bypassed the rules of precaution, proof of effectiveness, and assessment of safety applicable to all other health sciences and clinical support systems. Evaluation of informatics applications, compilation and recognition of the importance of evidence, and normalisation of Evidence Based Health Informatics, are now long overdue on grounds of efficiency and safety. Ammenwerth has now produced a rigorous analysis of the current position on evidence, and evaluation as its lifeblood, which demands careful study then active promulgation. Decisions based on political aspirations, 'modernisation' hopes, and unsupported commercial claims must cease - poor decisions are wasteful and bad systems can kill. Evidence Based Health Informatics should be promoted, and expected by users, as rigorously as Cochrane promoted Effectiveness and Efficiency, and Sackett promoted Evidence Based Medicine - both of which also were introduced retrospectively to challenge the less robust and partially unsafe traditional 'wisdom' in vogue. Ammenwerth's analysis gives the necessary material to promote that mission.",
"title": ""
},
{
"docid": "eab81b9df11e38384f1e49d56cc4e3dc",
"text": "BACKGROUND\nIntraoperative tumour perforation, positive tumour margins, wound complications and local recurrence are frequent difficulties with conventional abdominoperineal resection (APR) for rectal cancer. An alternative technique is the extended posterior perineal approach with gluteus maximus flap reconstruction of the pelvic floor. The aim of this study was to report the technique and early experience of extended APR in a select cohort of patients.\n\n\nMETHODS\nThe principles of operation are that the mesorectum is not dissected off the levator muscles, the perineal dissection is done in the prone position and the levator muscles are resected en bloc with the anus and lower rectum. The perineal defect is reconstructed with a gluteus maximus flap. Between 2001 and 2005, 28 patients with low rectal cancer were treated accordingly at the Karolinska Hospital.\n\n\nRESULTS\nTwo patients had ypT0 tumours, 20 ypT3 and six ypT4 tumours. Bowel perforation occurred in one, the circumferential resection margin (CRM) was positive in two, and four patients had local perineal wound complications. Two patients developed local recurrence after a median follow-up of 16 months.\n\n\nCONCLUSION\nThe extended posterior perineal approach with gluteus maximus flap reconstruction in APR has a low risk of bowel perforation, CRM involvement and local perineal wound complications. The rate of local recurrence may be lower than with conventional APR.",
"title": ""
},
{
"docid": "f5b9cde4b7848f803b3e742298c92824",
"text": "For many years, analysis of short chain fatty acids (volatile fatty acids, VFAs) has been routinely used in identification of anaerobic bacteria. In numerous scientific papers, the fatty acids between 9 and 20 carbons in length have also been used to characterize genera and species of bacteria, especially nonfermentative Gram negative organisms. With the advent of fused silica capillary columns (which allows recovery of hydroxy acids and resolution of many isomers), it has become practical to use gas chromatography of whole cell fatty acid methyl esters to identify a wide range of organisms.",
"title": ""
},
{
"docid": "c410b6cd3f343fc8b8c21e23e58013cd",
"text": "Virtualization is increasingly being used to address server management and administration issues like flexible resource allocation, service isolation and workload migration. In a virtualized environment, the virtual machine monitor (VMM) is the primary resource manager and is an attractive target for implementing system features like scheduling, caching, and monitoring. However, the lackof runtime information within the VMM about guest operating systems, sometimes called the semantic gap, is a significant obstacle to efficiently implementing some kinds of services.In this paper we explore techniques that can be used by a VMM to passively infer useful information about a guest operating system's unified buffer cache and virtual memory system. We have created a prototype implementation of these techniques inside the Xen VMM called Geiger and show that it can accurately infer when pages are inserted into and evicted from a system's buffer cache. We explore several nuances involved in passively implementing eviction detection that have not previously been addressed, such as the importance of tracking disk block liveness, the effect of file system journaling, and the importance of accounting for the unified caches found in modern operating systems.Using case studies we show that the information provided by Geiger enables a VMM to implement useful VMM-level services. We implement a novel working set size estimator which allows the VMM to make more informed memory allocation decisions. We also show that a VMM can be used to drastically improve the hit rate in remote storage caches by using eviction-based cache placement without modifying the application or operating system storage interface. Both case studies hint at a future where inference techniques enable a broad new class of VMM-level functionality.",
"title": ""
},
{
"docid": "91c792fac981d027ac1f2a2773674b10",
"text": "Cancer is a molecular disease associated with alterations in the genome, which, thanks to the highly improved sensitivity of mutation detection techniques, can be identified in cell-free DNA (cfDNA) circulating in blood, a method also called liquid biopsy. This is a non-invasive alternative to surgical biopsy and has the potential of revealing the molecular signature of tumors to aid in the individualization of treatments. In this review, we focus on cfDNA analysis, its advantages, and clinical applications employing genomic tools (NGS and dPCR) particularly in the field of oncology, and highlight its valuable contributions to early detection, prognosis, and prediction of treatment response.",
"title": ""
},
{
"docid": "38e7a36e4417bff60f9ae0dbb7aaf136",
"text": "Asynchronous implementation techniques, which measure logic delays at runtime and activate registers accordingly, are inherently more robust than their synchronous counterparts, which estimate worst case delays at design time and constrain the clock cycle accordingly. Desynchronization is a new paradigm to automate the design of asynchronous circuits from synchronous specifications, thus, permitting widespread adoption of asynchronicity without requiring special design skills or tools. In this paper, different protocols for desynchronization are first studied, and their correctness is formally proven using techniques originally developed for distributed deployment of synchronous language specifications. A taxonomy of existing protocols for asynchronous latch controllers, covering, in particular, the four-phase handshake protocols devised in the literature for micropipelines, is also provided. A new controller that exhibits provably maximal concurrency is then proposed, and the performance of desynchronized circuits is analyzed with respect to the original synchronous optimized implementation. Finally, this paper proves the feasibility and effectiveness of the proposed approach by showing its application to a set of real designs, including a complete implementation of the DLX microprocessor architecture",
"title": ""
}
] |
scidocsrr
|
ccaae771adaf42a8c6afed7d8a0f2821
|
Benchmarking modern distributed streaming platforms
|
[
{
"docid": "4fa73e04ccc8620c12aaea666ea366a6",
"text": "The popularity of the Web and Internet commerce provides many extremely large datasets from which information can be gleaned by data mining. This book focuses on practical algorithms that have been used to solve key problems in data mining and can be used on even the largest datasets. It begins with a discussion of the map-reduce framework, an important tool for parallelizing algorithms automatically. The tricks of locality-sensitive hashing are explained. This body of knowledge, which deserves to be more widely known, is essential when seeking similar objects in a very large collection without having to compare each pair of objects. Stream processing algorithms for mining data that arrives too fast for exhaustive processing are also explained. The PageRank idea and related tricks for organizing the Web are covered next. Other chapters cover the problems of finding frequent itemsets and clustering, each from the point of view that the data is too large to fit in main memory, and two applications: recommendation systems and Web advertising, each vital in e-commerce. This second edition includes new and extended coverage on social networks, machine learning and dimensionality reduction. Written by leading authorities in database and web technologies, it is essential reading for students and practitioners alike",
"title": ""
}
] |
[
{
"docid": "73cfe07d02651eee42773824d03dcfa1",
"text": "Discovery of usage patterns from Web data is one of the primary purposes for Web Usage Mining. In this paper, a technique to generate Significant Usage Patterns (SUP) is proposed and used to acquire significant “user preferred navigational trails”. The technique uses pipelined processing phases including sub-abstraction of sessionized Web clickstreams, clustering of the abstracted Web sessions, concept-based abstraction of the clustered sessions, and SUP generation. Using this technique, valuable customer behavior information can be extracted by Web site practitioners. Experiments conducted using Web log data provided by J.C.Penney demonstrate that SUPs of different types of customers are distinguishable and interpretable. This technique is particularly suited for analysis of dynamic websites.",
"title": ""
},
{
"docid": "c68ec0f721c8d8bfa27a415ba10708cf",
"text": "Textures are widely used in modern computer graphics. Their size, however, is often a limiting factor. Considering the widespread adaptation of mobile virtual and augmented reality applications, efficient storage of textures has become an important factor.\n We present an approach to analyse textures of a given mesh and compute a new set of textures with the goal of improving storage efficiency and reducing memory requirements. During this process the texture coordinates of the mesh are updated as required. Textures are analysed based on the UV-coordinates of one or more meshes and deconstructed into per-triangle textures. These are further analysed to detect single coloured as well as identical per-triangle textures. Our approach aims to remove these redundancies in order to reduce the amount of memory required to store the texture data. After this analysis, the per-triangle textures are compiled into a new set of texture images of user defined size. Our algorithm aims to pack texture data as tightly as possible in order to reduce the memory requirements.",
"title": ""
},
{
"docid": "5b45931590cb1e20b0a6f546316dc465",
"text": "We consider the task of accurately controlling a complex system, such as autonomously sliding a car sideways into a parking spot. Although certain regions of this domain are extremely hard to model (i.e., the dynamics of the car while skidding), we observe that in practice such systems are often remarkably deterministic over short periods of time, even in difficult-to-model regions. Motivated by this intuition, we develop a probabilistic method for combining closed-loop control in the well-modeled regions and open-loop control in the difficult-to-model regions. In particular, we show that by combining 1) an inaccurate model of the system and 2) a demonstration of the desired behavior, our approach can accurately and robustly control highly challenging systems, without the need to explicitly model the dynamics in the most complex regions and without the need to hand-tune the switching control law. We apply our approach to the task of autonomous sideways sliding into a parking spot, and show that we can repeatedly and accurately control the system, placing the car within about 2 feet of the desired location; to the best of our knowledge, this represents the state of the art in terms of accurately controlling a vehicle in such a maneuver.",
"title": ""
},
{
"docid": "6aed31a677c2fca976c91c67abd1e7b1",
"text": "Facebook is the most popular Social Network Site (SNS) among college students. Despite the popularity and extensive use of Facebook by students, its use has not made significant inroads into classroom usage. In this study, we seek to examine why this is the case and whether it would be worthwhile for faculty to invest the time to integrate Facebook into their teaching. To this end, we decided to undertake a study with a sample of 214 undergraduate students at the University of Huelva (Spain). We applied the structural equation model specifically designed by Mazman and Usluel (2010) to identify the factors that may motivate these students to adopt and use social network tools, specifically Facebook, for educational purposes. According to our results, Social Influence is the most important factor in predicting the adoption of Facebook; students are influenced to adopt it to establish or maintain contact with other people with whom they share interests. Regarding the purposes of Facebook usage, Social Relations is perceived as the most important factor among all of the purposes collected. Our findings also revealed that the educational use of Facebook is explained directly by its purposes of usage and indirectly by its adoption. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "106915eaac271c255aef1f1390577c64",
"text": "Parking is costly and limited in almost every major city in the world. Innovative parking systems for meeting near-term parking demand are needed. This paper proposes a novel, secure, and intelligent parking system (SmartParking) based on secured wireless network and sensor communication. From the point of users' view, SmartParking is a secure and intelligent parking service. The parking reservation is safe and privacy preserved. The parking navigation is convenient and efficient. The whole parking process will be a non-stop service. From the point of management's view, SmartParking is an intelligent parking system. The parking process can be modeled as birth-death stochastic process and the prediction of revenues can be made. Based on the prediction, new business promotion can be made, for example, on-sale prices and new parking fees. In SmartParking, new promotions can be published through wireless network. We address hardware/software architecture, implementations, and analytical models and results. The evaluation of this proposed system proves its efficiency.",
"title": ""
},
{
"docid": "70d7c838e7b5c4318e8764edb5a70555",
"text": "This research developed and tested a model of turnover contagion in which the job embeddedness and job search behaviors of coworkers influence employees’ decisions to quit. In a sample of 45 branches of a regional bank and 1,038 departments of a national hospitality firm, multilevel analysis revealed that coworkers’ job embeddedness and job search behaviors explain variance in individual “voluntary turnover” over and above that explained by other individual and group-level predictors. Broadly speaking, these results suggest that coworkers’ job embeddedness and job search behaviors play critical roles in explaining why people quit their jobs. Implications are discussed.",
"title": ""
},
{
"docid": "e34c102bf9c690e394ce7e373128be10",
"text": "We learn a joint model of sentence extraction and compression for multi-document summarization. Our model scores candidate summaries according to a combined linear model whose features factor over (1) the n-gram types in the summary and (2) the compressions used. We train the model using a marginbased objective whose loss captures end summary quality. Because of the exponentially large set of candidate summaries, we use a cutting-plane algorithm to incrementally detect and add active constraints efficiently. Inference in our model can be cast as an ILP and thereby solved in reasonable time; we also present a fast approximation scheme which achieves similar performance. Our jointly extracted and compressed summaries outperform both unlearned baselines and our learned extraction-only system on both ROUGE and Pyramid, without a drop in judged linguistic quality. We achieve the highest published ROUGE results to date on the TAC 2008 data set.",
"title": ""
},
{
"docid": "28720ce70b52adf92d8924143377ddd6",
"text": "This article describes an approach to building a cost-effective and research-grade visual-inertial (VI) odometry-aided vertical takeoff and landing (VTOL) platform. We utilize an off-the-shelf VI sensor, an onboard computer, and a quadrotor platform, all of which are factory calibrated and mass produced, thereby sharing similar hardware and sensor specifications [e.g., mass, dimensions, intrinsic and extrinsic of camera-inertial measurement unit (IMU) systems, and signal-to-noise ratio]. We then perform system calibration and identification, enabling the use of our VI odometry, multisensor fusion (MSF), and model predictive control (MPC) frameworks with off-the-shelf products. This approach partially circumvents the tedious parameter-tuning procedures required to build a full system. The complete system is extensively evaluated both indoors using a motioncapture system and outdoors using a laser tracker while performing hover and step responses and trajectory-following tasks in the presence of external wind disturbances. We achieve root-mean-square (rms) pose errors of 0.036 m with respect to reference hover trajectories. We also conduct relatively long distance (.180 m) experiments on a farm site, demonstrating a 0.82% drift error of the total flight distance. This article conveys the insights we acquired about the platform and sensor module and offers open-source code with tutorial documentation to the community.",
"title": ""
},
{
"docid": "7ddab8f1a5306062f4b835e7bf696e9e",
"text": "WGCNA begins with the understanding that the information captured by microarray experiments is far richer than a list of differentially expressed genes. Rather, microarray data are more completely represented by considering the relationships between measured transcripts, which can be assessed by pair-wise correlations between gene expression profiles. In most microarray data analyses, however, these relationships go essentially unexplored. WGCNA starts from the level of thousands of genes, identifies clinically interesting gene modules, and finally uses intramodular connectivity, gene significance (e.g. based on the correlation of a gene expression profile with a sample trait) to identify key genes in the disease pathways for further validation. WGCNA alleviates the multiple testing problem inherent in microarray data analysis. Instead of relating thousands of genes to a microarray sample trait, it focuses on the relationship between a few (typically less than 10) modules and the sample trait. Toward this end, it calculates the eigengene significance (correlation between sample trait and eigengene) and the corresponding p-value for each module. The module definition does not make use of a priori defined gene sets. Instead, modules are constructed from the expression data by using hierarchical clustering. Although it is advisable to relate the resulting modules to gene ontology information to assess their biological plausibility, it is not required. Because the modules may correspond to biological pathways, focusing the analysis on intramodular hub genes (or the module eigengenes) amounts to a biologically motivated data reduction scheme. Because the expression profiles of intramodular hub genes are highly correlated, typically dozens of candidate biomarkers result. Although these candidates are statistically equivalent, they may differ in terms of biological plausibility or clinical utility. Gene ontology information can be useful for further prioritizing intramodular hub genes. Examples of biological studies that show the importance of intramodular hub genes can be found reported in [4, 1, 2, 3, 5]. A flow chart of a typical network analysis is shown in Fig. 1. Below we present a short glossary of important network-related terms.",
"title": ""
},
{
"docid": "8f227f66fc7c86c19edae8036c571579",
"text": "Traditionally, the most commonly used source of bibliometric data is Thomson ISI Web of Knowledge, in particular the Web of Science and the Journal Citation Reports (JCR), which provide the yearly Journal Impact Factors (JIF). This paper presents an alternative source of data (Google Scholar, GS) as well as 3 alternatives to the JIF to assess journal impact (h-index, g-index and the number of citations per paper). Because of its broader range of data sources, the use of GS generally results in more comprehensive citation coverage in the area of management and international business. The use of GS particularly benefits academics publishing in sources that are not (well) covered in ISI. Among these are books, conference papers, non-US journals, and in general journals in the field of strategy and international business. The 3 alternative GS-based metrics showed strong correlations with the traditional JIF. As such, they provide academics and universities committed to JIFs with a good alternative for journals that are not ISI-indexed. However, we argue that these metrics provide additional advantages over the JIF and that the free availability of GS allows for a democratization of citation analysis as it provides every academic access to citation data regardless of their institution’s financial means.",
"title": ""
},
{
"docid": "bbee52ebe65b2f7b8d0356a3fbdb80bf",
"text": "Science Study Book Corpus Document Filter [...] enters a d orbital. The valence electrons (those added after the last noble gas configuration) in these elements include the ns and (n \\u2013 1) d electrons. The official IUPAC definition of transition elements specifies those with partially filled d orbitals. Thus, the elements with completely filled orbitals (Zn, Cd, Hg, as well as Cu, Ag, and Au in Figure 6.30) are not technically transition elements. However, the term is frequently used to refer to the entire d block (colored yellow in Figure 6.30), and we will adopt this usage in this textbook. Inner transition elements are metallic elements in which the last electron added occupies an f orbital.",
"title": ""
},
{
"docid": "409d104fa3e992ac72c65b004beaa963",
"text": "The 19-item Body-Image Questionnaire, developed by our team and first published in this journal in 1987 by Bruchon-Schweitzer, was administered to 1,222 male and female French subjects. A principal component analysis of their responses yielded an axis we interpreted as a general Body Satisfaction dimension. The four-factor structure observed in 1987 was not replicated. Body Satisfaction was associated with sex, health, and with current and future emotional adjustment.",
"title": ""
},
{
"docid": "f7dbb8adec55a4c52563194ecb6f3e8a",
"text": "The emotion of gratitude is thought to have social effects, but empirical studies of such effects have focused largely on the repaying of kind gestures. The current research focused on the relational antecedents of gratitude and its implications for relationship formation. The authors examined the role of naturally occurring gratitude in college sororities during a week of gift-giving from older members to new members. New members recorded reactions to benefits received during the week. At the end of the week and 1 month later, the new and old members rated their interactions and their relationships. Perceptions of benefactor responsiveness predicted gratitude for benefits, and gratitude during the week predicted future relationship outcomes. Gratitude may function to promote relationship formation and maintenance.",
"title": ""
},
{
"docid": "2e5a51176d1c0ab5394bb6a2b3034211",
"text": "School transport is used by millions of children worldwide. However, not a substantial effort is done in order to improve the existing school transport systems. This paper presents the development of an IoT based scholar bus monitoring system. The development of new telematics technologies has enabled the development of various Intelligent Transport Systems. However, these are not presented as ITS services to end users. This paper presents the development of an IoT based scholar bus monitoring system that through localization and speed sensors will allow many stakeholders such as parents, the goverment, the school and many other authorities to keep realtime track of the scholar bus behavior, resulting in a better controlled scholar bus.",
"title": ""
},
{
"docid": "61953281f4b568ad15e1f62be9d68070",
"text": "Most of the effort in today’s digital forensics community lies in the retrieval and analysis of existing information from computing systems. Little is being done to increase the quantity and quality of the forensic information on today’s computing systems. In this paper we pose the question of what kind of information is desired on a system by a forensic investigator. We give an overview of the information that exists on current systems and discuss its shortcomings. We then examine the role that file system metadata plays in digital forensics and analyze what kind of information is desirable for different types of forensic investigations, how feasible it is to obtain it, and discuss issues about storing the information.",
"title": ""
},
{
"docid": "2f7edc539bc61f8fc07bc6f5f8e496e0",
"text": "We investigate the contextual multi-armed bandit problem in an adversarial setting and introduce an online algorithm that asymptotically achieves the performance of the best contextual bandit arm selection strategy under certain conditions. We show that our algorithm is highly efficient and provides significantly improved performance with a guaranteed performance upper bound in a strong mathematical sense. We have no statistical assumptions on the context vectors and the loss of the bandit arms, hence our results are guaranteed to hold even in adversarial environments. We use a tree notion in order to partition the space of context vectors in a nested structure. Using this tree, we construct a large class of context dependent bandit arm selection strategies and adaptively combine them to achieve the performance of the best strategy. We use the hierarchical nature of introduced tree to implement this combination with a significantly low computational complexity, thus our algorithm can be efficiently used in applications involving big data. Through extensive set of experiments involving synthetic and real data, we demonstrate significant performance gains achieved by the proposed algorithm with respect to the state-of-the-art adversarial bandit algorithms.",
"title": ""
},
{
"docid": "198944af240d732b6fadcee273c1ba18",
"text": "This paper presents a fast and energy-efficient current mirror based level shifter with wide shifting range from sub-threshold voltage up to I/O voltage. Small delay and low power consumption are achieved by addressing the non-full output swing and charge sharing issues in the level shifter from [4]. The measurement results show that the proposed level shifter can convert from 0.21V up to 3.3V with significantly improved delay and power consumption over the existing level shifters. Compared with [4], the maximum reduction of delay, switching energy and leakage power are 3X, 19X, 29X respectively when converting 0.3V to a higher voltage between 0.6V and 3.3V.",
"title": ""
},
{
"docid": "c221568e2ed4d6192ab04119046c4884",
"text": "An efficient Ultra-Wideband (UWB) Frequency Selective Surface (FSS) is presented to mitigate the potential harmful effects of Electromagnetic Interference (EMI) caused by the radiations emitted by radio devices. The proposed design consists of circular and square elements printed on the opposite surfaces of FR4 substrate of 3.2 mm thickness. It ensures better angular stability by up to 600, bandwidth has been significantly enhanced by up to 16. 21 GHz to provide effective shielding against X-, Ka- and K-bands. While signal attenuation has also been improved remarkably in the desired band compared to the results presented in the latest research. Theoretical results are presented for TE and TM polarization for normal and oblique angles of incidence.",
"title": ""
},
{
"docid": "8143d59b02198a634c15d9f484f37d56",
"text": "The manufacturing industry is faced with strong competition making the companies’ knowledge resources and their systematic management a critical success factor. Yet, existing concepts for the management of process knowledge in manufacturing are characterized by major shortcomings. Particularly, they are either exclusively based on structured knowledge, e. g., formal rules, or on unstructured knowledge, such as documents, and they focus on isolated aspects of manufacturing processes. To address these issues, we present the Manufacturing Knowledge Repository, a holistic repository that consolidates structured and unstructured process knowledge to facilitate knowledge management and process optimization in manufacturing. First, we define requirements, especially the types of knowledge to be handled, e. g., data mining models and text documents. On this basis, we develop a conceptual repository data model associating knowledge items and process components such as machines and process steps. Furthermore, we discuss implementation issues including storage architecture variants and finally present both an evaluation of the data model and a proof of concept based on a prototypical implementation in a case example.",
"title": ""
},
{
"docid": "f119ffed641d2403dbcefad70a0669ac",
"text": "The fast growing market of mobile device adoption and cloud computing has led to exploitation of mobile devices utilizing cloud services. One major challenge facing the usage of mobile devices in the cloud environment is mobile synchronization to the cloud, e.g., synchronizing contacts, text messages, images, and videos. Owing to the expected high volume of traffic and high time complexity required for synchronization, an appropriate synchronization algorithm needs to be developed. Delta synchronization is one method of synchronizing compressed files that requires uploading the whole file, even when no changes were made or if it was only partially changed. In the present study, we proposed an algorithm, based on Delta synchronization, to solve the problem of synchronizing compressed files under various forms of modification (e.g., not modified, partially modified, or completely modified). To measure the efficiency of our proposed algorithm, we compared it to the Dropbox application algorithm. The results demonstrated that our algorithm outperformed the regular Dropbox synchronization mechanism by reducing the synchronization time, cost, and traffic load between clients and the cloud service provider.",
"title": ""
}
] |
scidocsrr
|
5dc0280f612ae8e1f5b1fe359dfbe83b
|
Classification regions of deep neural networks
|
[
{
"docid": "77655e3ed587676df9284c78eb36a438",
"text": "We show that parametric models trained by a stochastic gradient method (SGM) with few iterations have vanishing generalization error. We prove our results by arguing that SGM is algorithmically stable in the sense of Bousquet and Elisseeff. Our analysis only employs elementary tools from convex and continuous optimization. We derive stability bounds for both convex and non-convex optimization under standard Lipschitz and smoothness assumptions. Applying our results to the convex case, we provide new insights for why multiple epochs of stochastic gradient methods generalize well in practice. In the non-convex case, we give a new interpretation of common practices in neural networks, and formally show that popular techniques for training large deep models are indeed stability-promoting. Our findings conceptually underscore the importance of reducing training time beyond its obvious benefit.",
"title": ""
},
{
"docid": "430bfb1ae136a7d886b4c96c455ddc59",
"text": "We combine Riemannian geometry with the mean field theory of high dimensional chaos to study the nature of signal propagation in generic, deep neural networks with random weights. Our results reveal an order-to-chaos expressivity phase transition, with networks in the chaotic phase computing nonlinear functions whose global curvature grows exponentially with depth but not width. We prove this generic class of deep random functions cannot be efficiently computed by any shallow network, going beyond prior work restricted to the analysis of single functions. Moreover, we formalize and quantitatively demonstrate the long conjectured idea that deep networks can disentangle highly curved manifolds in input space into flat manifolds in hidden space. Our theoretical analysis of the expressive power of deep networks broadly applies to arbitrary nonlinearities, and provides a quantitative underpinning for previously abstract notions about the geometry of deep functions.",
"title": ""
},
{
"docid": "481f4a4b14d4594d8b023f9df074dfeb",
"text": "We describe a method to produce a network where current methods such as DeepFool have great difficulty producing adversarial samples. Our construction suggests some insights into how deep networks work. We provide a reasonable analyses that our construction is difficult to defeat, and show experimentally that our method is hard to defeat with both Type I and Type II attacks using several standard networks and datasets. This SafetyNet architecture is used to an important and novel application SceneProof, which can reliably detect whether an image is a picture of a real scene or not. SceneProof applies to images captured with depth maps (RGBD images) and checks if a pair of image and depth map is consistent. It relies on the relative difficulty of producing naturalistic depth maps for images in post processing. We demonstrate that our SafetyNet is robust to adversarial examples built from currently known attacking approaches.",
"title": ""
},
{
"docid": "6efdf43a454ce7da51927c07f1449695",
"text": "We investigate efficient representations of functions that can be written as outputs of so-called sum-product networks, that alternate layers of product and sum operations (see Fig 1 for a simple sum-product network). We find that there exist families of such functions that can be represented much more efficiently by deep sum-product networks (i.e. allowing multiple hidden layers), compared to shallow sum-product networks (constrained to using a single hidden layer). For instance, there is a family of functions fn where n is the number of input variables, such that fn can be computed with a deep sum-product network of log 2 n layers and n−1 units, while a shallow sum-product network (two layers) requires 2 √ n−1 units. These mathematical results are in the same spirit as those by H̊astad and Goldmann (1991) on the limitations of small depth computational circuits. They motivate using deep networks to be able to model complex functions more efficiently than with shallow networks. Exponential gains in terms of the number of parameters are quite significant in the context of statistical machine learning. Indeed, the number of training samples required to optimize a model’s parameters without suffering from overfitting typically increases with the number of parameters. Deep networks thus offer a promising way to learn complex functions from limited data, even though parameter optimization may still be challenging.",
"title": ""
}
] |
[
{
"docid": "d30cdd113970fa8570a795af6b5193e1",
"text": "Alignment of time series is an important problem to solve in many scientific disciplines. In particular, temporal alignment of two or more subjects performing similar activities is a challenging problem due to the large temporal scale difference between human actions as well as the inter/intra subject variability. In this paper we present canonical time warping (CTW), an extension of canonical correlation analysis (CCA) for spatio-temporal alignment of human motion between two subjects. CTW extends previous work on CCA in two ways: (i) it combines CCA with dynamic time warping (DTW), and (ii) it extends CCA by allowing local spatial deformations. We show CTW’s effectiveness in three experiments: alignment of synthetic data, alignment of motion capture data of two subjects performing similar actions, and alignment of similar facial expressions made by two people. Our results demonstrate that CTW provides both visually and qualitatively better alignment than state-of-the-art techniques based on DTW.",
"title": ""
},
{
"docid": "14eda321b496bb0ffae79d6532af51ed",
"text": "Agile software development provides a way to organise the complex task of multi-participant software development while accommodating constant project change. Agile software development is well accepted in the practitioner community but there is little understanding of how such projects achieve effective coordination, which is known to be critical in successful software projects. A theoretical model of coordination in the agile software development context is presented based on empirical data from three cases of co-located agile software development. Many practices in these projects act as coordination mechanisms, which together form a coordination strategy. Coordination strategy in this context has three components: synchronisation, structure, and boundary spanning. Coordination effectiveness has two components: implicit and explicit. The theoretical model of coordination in agile software development projects proposes that an agile coordination strategy increases coordination effectiveness. This model has application for practitioners who want to select appropriate practices from agile methods to ensure they achieve coordination coverage in their project. For the field of information systems development, this theory contributes to knowledge of coordination and coordination effectiveness in the context of agile software development. © 2012 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "f7fa13048b42a566d8621f267141f80d",
"text": "The software underpinning today's IT systems needs to adapt dynamically and predictably to rapid changes in system workload, environment and objectives. We describe a software framework that achieves such adaptiveness for IT systems whose components can be modelled as Markov chains. The framework comprises (i) an autonomic architecture that uses Markov-chain quantitative analysis to dynamically adjust the parameters of an IT system in line with its state, environment and objectives; and (ii) a method for developing instances of this architecture for real-world systems. Two case studies are presented that use the framework successfully for the dynamic power management of disk drives, and for the adaptive management of cluster availability within data centres, respectively.",
"title": ""
},
{
"docid": "4b43203c83b46f0637d048c7016cce17",
"text": "Efficient detection of three dimensional (3D) objects in point clouds is a challenging problem. Performing 3D descriptor matching or 3D scanning-window search with detector are both time-consuming due to the 3-dimensional complexity. One solution is to project 3D point cloud into 2D images and thus transform the 3D detection problem into 2D space, but projection at multiple viewpoints and rotations produce a large amount of 2D detection tasks, which limit the performance and complexity of the 2D detection algorithm choice. We propose to use convolutional neural network (CNN) for the 2D detection task, because it can handle all viewpoints and rotations for the same class of object together, as well as predicting multiple classes of objects with the same network, without the need for individual detector for each object class. We further improve the detection efficiency by concatenating two extra levels of early rejection networks with binary outputs before the multi-class detection network. Experiments show that our method has competitive overall performance with at least one-order of magnitude speedup comparing with latest 3D point cloud detection methods.",
"title": ""
},
{
"docid": "c821e5d4cc3705b0c3180e802e25b591",
"text": "This paper discusses the issue of profit shifting and ‘aggressive’ tax planning by multinational firms. The paper makes two contributions. Firstly, we provide some background information to the debate by giving a brief overview over existing empirical studies on profit shifting and by describing arrangements for IP-based profit shifting which are used by the companies currently accused of avoiding taxes. We then show that preventing this type of tax avoidance is, in principle, straightforward. Secondly, we argue that, in the short term, policy makers should focus on extending withholding taxes in an internationally coordinated way. Other measures which are currently being discussed, in particular unilateral measures like limitations on interest and license deduction, fundamental reforms of the international tax system and country-by-country reporting, are either economically harmful or need to be elaborated much further before their introduction can be considered. JEL Classification: H20, H25, F23, K34",
"title": ""
},
{
"docid": "78db8b57c3221378847092e5283ad754",
"text": "This paper analyzes correlations and causalities between Bitcoin market indicators and Twitter posts containing emotional signals on Bitcoin. Within a timeframe of 104 days (November 23 2013 March 7 2014), about 160,000 Twitter posts containing ”bitcoin” and a positive, negative or uncertainty related term were collected and further analyzed. For instance, the terms ”happy”, ”love”, ”fun”, ”good”, ”bad”, ”sad” and ”unhappy” represent positive and negative emotional signals, while ”hope”, ”fear” and ”worry” are considered as indicators of uncertainty. The static (daily) Pearson correlation results show a significant positive correlation between emotional tweets and the close price, trading volume and intraday price spread of Bitcoin. However, a dynamic Granger causality analysis does not confirm a causal effect of emotional Tweets on Bitcoin market values. To the contrary, the analyzed data shows that a higher Bitcoin trading volume Granger causes more signals of uncertainty within a 24 to 72hour timeframe. This result leads to the interpretation that emotional sentiments rather mirror the market than that they make it predictable. Finally, the conclusion of this paper is that the microblogging platform Twitter is Bitcoins virtual trading floor, emotionally reflecting its trading dynamics.2",
"title": ""
},
{
"docid": "ebf92a0faf6538f1d2b85fb2aa497e80",
"text": "The generally accepted assumption by most multimedia researchers is that learning is inhibited when on-screen text and narration containing the same information is presented simultaneously, rather than on-screen text or narration alone. This is known as the verbal redundancy effect. Are there situations where the reverse is true? This research was designed to investigate the reverse redundancy effect for non-native English speakers learning English reading comprehension, where two instructional modes were used the redundant mode and the modality mode. In the redundant mode, static pictures and audio narration were presented with synchronized redundant on-screen text. In the modality mode, only static pictures and audio were presented. In both modes, learners were allowed to control the pacing of the lessons. Participants were 209 Yemeni learners in their first year of tertiary education. Examination of text comprehension scores indicated that those learners who were exposed to the redundancy mode performed significantly better than learners in the modality mode. They were also significantly more motivated than their counterparts in the modality mode. This finding has added an important modification to the redundancy effect. That is the reverse redundancy effect is true for multimedia learning of English as a foreign language for students where textual information was foreign to them. In such situations, the redundant synchronized on-screen text did not impede learning; rather it reduced the cognitive load and thereby enhanced learning.",
"title": ""
},
{
"docid": "de03cbaa0b7fd8474f1729fe57ecc8a0",
"text": "Cloud computing is an emerging paradigm that allows users to conveniently access computing resources as pay-per-use services. Whereas cloud offerings such as Amazon’s Elastic Compute Cloud and Google Apps are rapidly gaining a large user base, enterprise software’s migration towards the cloud is still in its infancy. For software vendors the move towardscloud solutions implies profound changes in their value-creation logic. Not only are they forced to deliver fully web-enabled solutions and to replace their license model with service fees, they also need to build the competencies to host and manage business-critical applications for their customers. This motivates our research, which investigates cloud computing’s implications for enterprise software vendors’ business models. From multiple case studies covering traditional and pure cloud providers, we find that moving from on-premise software to cloud services affects all business model components, that is, the customer value proposition, resource base, value configuration, and financial flows. It thus underpins cloud computing’s disruptive nature in the enterprise software domain. By deriving two alternative business model configurations, SaaS and SaaS+PaaS, our research synthesizes the strategic choices for enterprise software vendors and provides guidelines for designing viable business models.",
"title": ""
},
{
"docid": "3bba36e8f3d3a490681e82c8c3a10b11",
"text": "This paper describes the design and implementation of programmable AXI bus Interface modules in Verilog Hardware Description Language (HDL) and implementation in Xilinx Spartan 3E FPGA. All the interface modules are reconfigurable with the data size, burst type, number of transfers in a burst. Multiple masters can communicate with different slave memory locations concurrently. An arbiter controls the burst grant to different bus masters based on Round Robin algorithm. Separate decoder modules are implemented for write address channel, write data channel, write response channel, read address channel, read data channel. The design can support a maximum of 16 masters. All the RTL simulations are performed using Modelsim RTL Simulator. Each independent module is synthesized in XC3S250EPQ208-5 FPGA and the maximum speed is found to be 298.958 MHz. All the design modules can be integrated to create a soft IP for the AXI BUS system.",
"title": ""
},
{
"docid": "5089b13262867f2bd77d85460000cfaa",
"text": "While different optical flow techniques continue to appear, there has been a lack of quantitative evaluation of existing methods. For a common set of real and synthetic image sequences, we report the results of a number of regularly cited optical flow techniques, including instances of differential, matching, energy-based, and phase-based methods. Our comparisons are primarily empirical, and concentrate on the accuracy, reliability, and density of the velocity measurements; they show that performance can differ significantly among the techniques we implemented.",
"title": ""
},
{
"docid": "437bf63857bf42d2e46362475c9badb4",
"text": "Regenerative braking is an effective approach for electric vehicles (EVs) to extend their driving range. A fuzzy-logic-based regenerative braking strategy (RBS) integrated with series regenerative braking is developed in this paper to advance the level of energy-savings. From the viewpoint of securing car stability in braking operations, the braking force distribution between the front and rear wheels so as to accord with the ideal distribution curve are considered to prevent vehicles from experiencing wheel lock and slip phenomena during braking. Then, a fuzzy RBS using the driver’s braking force command, vehicle speed, battery SOC, battery temperature are designed to determine the distribution between friction braking force and regenerative braking force to improve the energy recuperation efficiency. The experimental results on an “LF620” prototype EV validated the feasibility and effectiveness of regenerative braking and showed that the proposed fuzzy RBS was endowed with good control performance. The maximum driving range of LF620 EV was improved by 25.7% compared with non-RBS conditions.",
"title": ""
},
{
"docid": "9dab38b961f4be434c95ca6696ba52bd",
"text": "The widespread use and increasing capabilities of mobiles devices are making them a viable platform for offering mobile services. However, the increasing resource demands of mobile services and the inherent constraints of mobile devices limit the quality and type of functionality that can be offered, preventing mobile devices from exploiting their full potential as reliable service providers. Computation offloading offers mobile devices the opportunity to transfer resource-intensive computations to more resourcefulcomputing infrastructures. We present a framework for cloud-assisted mobile service provisioning to assist mobile devices in delivering reliable services. The framework supports dynamic offloading based on the resource status of mobile systems and current network conditions, while satisfying the user-defined energy constraints. It also enables the mobile provider to delegate the cloud infrastructure to forward the service response directly to the user when no further processing is required by the provider. Performance evaluation shows up to 6x latency improvement for computation-intensive services that do not require large data transfer. Experiments show that the operation of the cloud-assisted service provisioning framework does not pose significant overhead on mobile resources, yet it offers robust and efficient computation offloading.",
"title": ""
},
{
"docid": "c828195cfc88abd598d1825f69932eb0",
"text": "The central purpose of passive signal intercept receivers is to perform automatic categorization of unknown radar signals. Currently, there is an urgent need to develop intelligent classification algorithms for these devices due to emerging complexity of radar waveforms. Especially multifunction radars (MFRs) capable of performing several simultaneous tasks by utilizing complex, dynamically varying scheduled waveforms are a major challenge for automatic pattern classification systems. To assist recognition of complex radar emissions in modern intercept receivers, we have developed a novel method to recognize dynamically varying pulse repetition interval (PRI) modulation patterns emitted by MFRs. We use robust feature extraction and classifier design techniques to assist recognition in unpredictable real-world signal environments. We classify received pulse trains hierarchically which allows unambiguous detection of the subpatterns using a sliding window. Accuracy, robustness and reliability of the technique are demonstrated with extensive simulations using both static and dynamically varying PRI modulation patterns.",
"title": ""
},
{
"docid": "42043ee6577d791874c1aa34baf81e64",
"text": "Bagging, boosting and Random Forests are classical ensemble methods used to improve the performance of single classifiers. They obtain superior performance by increasing the accuracy and diversity of the single classifiers. Attempts have been made to reproduce these methods in the more challenging context of evolving data streams. In this paper, we propose a new variant of bagging, called leveraging bagging. This method combines the simplicity of bagging with adding more randomization to the input, and output of the classifiers. We test our method by performing an evaluation study on synthetic and real-world datasets comprising up to ten million examples.",
"title": ""
},
{
"docid": "c34b474b06d21d1bebdcb8a37b8470c5",
"text": "Using machine learning to analyze data often results in developer exhaust – code, logs, or metadata that do not de ne the learning algorithm but are byproducts of the data analytics pipeline. We study how the rich information present in developer exhaust can be used to approximately solve otherwise complex tasks. Speci cally, we focus on using log data associated with training deep learning models to perform model search by predicting performance metrics for untrainedmodels. Instead of designing a di erent model for each performance metric, we present two preliminary methods that rely only on information present in logs to predict these characteristics for di erent architectures. We introduce (i) a nearest neighbor approachwith a hand-crafted edit distancemetric to comparemodel architectures and (ii) a more generalizable, end-to-end approach that trains an LSTM using model architectures and associated logs to predict performancemetrics of interest.We performmodel search optimizing for best validation accuracy, degree of over tting, and best validation accuracy given a constraint on training time. Our approaches can predict validation accuracy within 1.37% error on average, while the baseline achieves 4.13% by using the performance of a trainedmodel with the closest number of layers.When choosing the best performing model given constraints on training time, our approaches select the top-3 models that overlap with the true top3 models 82% of the time, while the baseline only achieves this 54% of the time. Our preliminary experiments hold promise for how developer exhaust can help learnmodels that can approximate various complex tasks e ciently. ACM Reference Format: Jian Zhang, Max Lam, Stephanie Wang, Paroma Varma, Luigi Nardi, Kunle Olukotun, Christopher Ré. 2018. Exploring the Utility of Developer Exhaust. In DEEM’18: International Workshop on Data Management for End-to-End Machine Learning, June 15, 2018, Houston, TX, USA.",
"title": ""
},
{
"docid": "2f0661006bed34acfc170d98328992af",
"text": "Collision detection and response can make a virtual-reality application seem more believable. Unfortunately , existing collision-detection algorithms are too slow for interactive use. We present a new algorithm that is not only fast but also interruptible, allowing an application to trade quality for more speed. Our algorithm uses simple four-dimensional geometry to approximate motion, and sets of spheres to approximate three-dimensional surfaces. The algorithm allows a sample application to run 5 to 7 times faster than it runs with existing algorithms.",
"title": ""
},
{
"docid": "2c1bd88f0fd23c6b63315aea067670b0",
"text": "This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant [29] of temporal ensembling [14], a technique that achieved state of the art results in the area of semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness. Our approach achieves state of the art results in a variety of benchmarks, including our winning entry in the VISDA-2017 visual domain adaptation challenge. In small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy that is close to that of a classifier trained in a supervised fashion.",
"title": ""
},
{
"docid": "16a384727d6a323437a0b6ed3cdcc230",
"text": "The ability to learn from a small number of examples has been a difficult problem in machine learning since its inception. While methods have succeeded with large amounts of training data, research has been underway in how to accomplish similar performance with fewer examples, known as one-shot or more generally few-shot learning. This technique has been shown to have promising performance, but in practice requires fixed-size inputs making it impractical for production systems where class sizes can vary. This impedes training and the final utility of few-shot learning systems. This paper describes an approach to constructing and training a network that can handle arbitrary example sizes dynamically as the system is used.",
"title": ""
},
{
"docid": "ac1e1d7daed4a960ff3a17a03155ddfa",
"text": "This paper explores the role of the business model in capturing value from early stage technology. A successful business model creates a heuristic logic that connects technical potential with the realization of economic value. The business model unlocks latent value from a technology, but its logic constrains the subsequent search for new, alternative models for other technologies later on—an implicit cognitive dimension overlooked in most discourse on the topic. We explore the intellectual roots of the concept, offer a working definition and show how the Xerox Corporation arose by employing an effective business model to commercialize a technology rejected by other leading companies of the day. We then show the long shadow that this model cast upon Xerox’s later management of selected spin-off companies from Xerox PARC. Xerox evaluated the technical potential of these spin-offs through its own business model, while those spin-offs that became successful did so through evolving business models that came to differ substantially from that of Xerox. The search and learning for an effective business model in failed ventures, by contrast, were quite limited.",
"title": ""
},
{
"docid": "f33f6263ef10bd702ddb18664b68a09f",
"text": "Research over the past five years has shown significant performance improvements using a technique called adaptive compilation. An adaptive compiler uses a compile-execute-analyze feedback loop to find the combination of optimizations and parameters that minimizes some performance goal, such as code size or execution time.Despite its ability to improve performance, adaptive compilation has not seen widespread use because of two obstacles: the large amounts of time that such systems have used to perform the many compilations and executions prohibits most users from adopting these systems, and the complexity inherent in a feedback-driven adaptive system has made it difficult to build and hard to use.A significant portion of the adaptive compilation process is devoted to multiple executions of the code being compiled. We have developed a technique called virtual execution to address this problem. Virtual execution runs the program a single time and preserves information that allows us to accurately predict the performance of different optimization sequences without running the code again. Our prototype implementation of this technique significantly reduces the time required by our adaptive compiler.In conjunction with this performance boost, we have developed a graphical-user interface (GUI) that provides a controlled view of the compilation process. By providing appropriate defaults, the interface limits the amount of information that the user must provide to get started. At the same time, it lets the experienced user exert fine-grained control over the parameters that control the system.",
"title": ""
}
] |
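As an illustration of the leveraging-bagging passage above (which adds randomization to the input of each ensemble member on an evolving data stream), here is a minimal Python sketch of Poisson-weighted online bagging. The λ value, the SGDClassifier base learner, and the synthetic stream are assumptions made only for this example and are not taken from the passage; a production streaming ensemble would typically use incremental learners designed for concept drift and also randomize the classifier outputs.

```python
# Sketch of Poisson-weighted online bagging on a data stream.
# Assumptions: lambda = 6, SGDClassifier base learners, synthetic binary stream.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

class PoissonOnlineBagging:
    def __init__(self, n_models=10, lam=6.0, classes=(0, 1)):
        self.lam = lam
        self.classes = np.array(classes)
        self.models = [SGDClassifier(random_state=i) for i in range(n_models)]

    def partial_fit(self, X, y):
        # Each model sees each example k ~ Poisson(lam) times, which emulates
        # repeated sampling with replacement without storing the stream.
        for m in self.models:
            k = rng.poisson(self.lam, size=len(y))
            mask = k > 0
            if mask.any():
                m.partial_fit(X[mask], y[mask], classes=self.classes,
                              sample_weight=k[mask].astype(float))
        return self

    def predict(self, X):
        votes = np.stack([m.predict(X) for m in self.models])
        # Majority vote across ensemble members.
        return np.apply_along_axis(
            lambda col: np.bincount(col, minlength=len(self.classes)).argmax(),
            axis=0, arr=votes)

# Tiny synthetic stream for demonstration.
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

ens = PoissonOnlineBagging()
for start in range(0, len(y), 200):            # process the stream in chunks
    ens.partial_fit(X[start:start + 200], y[start:start + 200])

print("training-stream accuracy:", round(float((ens.predict(X) == y).mean()), 3))
```

Using a Poisson weight larger than one simply makes each base learner see each arriving example more often on average, which is one cheap way to increase input randomization without revisiting past data.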
scidocsrr
|
d5e1646fa4a6fa74251f19eebe3cc2c5
|
Lowering the barriers to large-scale mobile crowdsensing
|
[
{
"docid": "b8808d637dcb8bbb430d68196587b3a4",
"text": "Crowd sourcing is based on a simple but powerful concept: Virtually anyone has the potential to plug in valuable information. The concept revolves around large groups of people or community handling tasks that have traditionally been associated with a specialist or small group of experts. With the advent of the smart devices, many mobile applications are already tapping into crowd sourcing to report community issues and traffic problems, but more can be done. While most of these applications work well for the average user, it neglects the information needs of particular user communities. We present CROWDSAFE, a novel convergence of Internet crowd sourcing and portable smart devices to enable real time, location based crime incident searching and reporting. It is targeted to users who are interested in crime information. The system leverages crowd sourced data to provide novel features such as a Safety Router and value added crime analytics. We demonstrate the system by using crime data in the metropolitan Washington DC area to show the effectiveness of our approach. Also highlighted is its ability to facilitate greater collaboration between citizens and civic authorities. Such collaboration shall foster greater innovation to turn crime data analysis into smarter and safe decisions for the public.",
"title": ""
},
{
"docid": "513ae13c6848f3a83c36dc43d34b43a5",
"text": "In this paper, we describe the design, analysis, implementation, and operational deployment of a real-time trip information system that provides passengers with the expected fare and trip duration of the taxi ride they are planning to take. This system was built in cooperation with a taxi operator that operates more than 15,000 taxis in Singapore. We first describe the overall system design and then explain the efficient algorithms used to achieve our predictions based on up to 21 months of historical data consisting of approximately 250 million paid taxi trips. We then describe various optimisations (involving region sizes, amount of history, and data mining techniques) and accuracy analysis (involving routes and weather) we performed to increase both the runtime performance and prediction accuracy. Our large scale evaluation demonstrates that our system is (a) accurate --- with the mean fare error under 1 Singapore dollar (~ 0.76 US$) and the mean duration error under three minutes, and (b) capable of real-time performance, processing thousands to millions of queries per second. Finally, we describe the lessons learned during the process of deploying this system into a production environment.",
"title": ""
}
] |
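To make the trip-information passage above more concrete, the sketch below buckets historical trips by origin cell, destination cell, and hour of day, and predicts the median fare and duration for a queried trip. The grid size, hourly bucketing, median aggregation, fallback rule, and synthetic trips are all illustrative assumptions; the deployed system described in the passage relies on far larger trip history and optimisations that are not shown here.

```python
# Sketch of region/time-bucketed fare and duration prediction.
# Assumptions: ~1 km grid cells, hourly buckets, median aggregation, synthetic data.
import numpy as np
import pandas as pd

GRID = 0.01  # assumed cell size in degrees (roughly 1 km)

def cell(lat, lon):
    # Map a coordinate to a coarse grid cell id.
    return int(np.floor(lat / GRID)), int(np.floor(lon / GRID))

def build_lookup(trips: pd.DataFrame) -> pd.DataFrame:
    """Aggregate trips into (origin cell, destination cell, hour) buckets."""
    df = trips.copy()
    df["p_lat_c"] = np.floor(df.pickup_lat / GRID).astype(int)
    df["p_lon_c"] = np.floor(df.pickup_lon / GRID).astype(int)
    df["d_lat_c"] = np.floor(df.drop_lat / GRID).astype(int)
    df["d_lon_c"] = np.floor(df.drop_lon / GRID).astype(int)
    df["hour"] = df.pickup_time.dt.hour
    keys = ["p_lat_c", "p_lon_c", "d_lat_c", "d_lon_c", "hour"]
    return df.groupby(keys)[["fare", "duration_min"]].median()

# Synthetic trip history for demonstration only.
rng = np.random.default_rng(1)
n = 5000
trips = pd.DataFrame({
    "pickup_lat": 1.29 + 0.05 * rng.random(n),
    "pickup_lon": 103.80 + 0.05 * rng.random(n),
    "drop_lat": 1.29 + 0.05 * rng.random(n),
    "drop_lon": 103.80 + 0.05 * rng.random(n),
    "pickup_time": pd.Timestamp("2014-01-01")
                   + pd.to_timedelta(rng.integers(0, 90 * 24, n), unit="h"),
})
dist = np.hypot(trips.pickup_lat - trips.drop_lat, trips.pickup_lon - trips.drop_lon)
trips["duration_min"] = 5 + 400 * dist + rng.normal(0, 2, n)
trips["fare"] = 3.0 + 50 * dist + rng.normal(0, 0.5, n)

lookup = build_lookup(trips)
fallback = trips[["fare", "duration_min"]].median()   # used when a bucket is empty

def predict(pickup, drop, hour):
    key = (*cell(*pickup), *cell(*drop), hour)
    row = lookup.loc[key] if key in lookup.index else fallback
    return float(row["fare"]), float(row["duration_min"])

print(predict((1.30, 103.82), (1.31, 103.84), hour=9))
```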
[
{
"docid": "f720554ba9cff8bec781f4ad2ec538aa",
"text": "English. Hate speech is prevalent in social media platforms. Systems that can automatically detect offensive content are of great value to assist human curators with removal of hateful language. In this paper, we present machine learning models developed at UW Tacoma for detection of misogyny, i.e. hate speech against women, in English tweets, and the results obtained with these models in the shared task for Automatic Misogyny Identification (AMI) at EVALITA2018. Italiano. Commenti offensivi nei confronti di persone con diversa orientazione sessuale o provenienza sociale sono oggigiorno prevalenti nelle piattaforme di social media. A tale fine, sistemi automatici in grado di rilevare contenuti offensivi nei confronti di alcuni gruppi sociali sono importanti per facilitare il lavoro dei moderatori di queste piattaforme a rimuovere ogni commento offensivo usato nei social media. In questo articolo, vi presentiamo sia dei modelli di apprendimento automatico sviluppati all’Università di Washington in Tacoma per il rilevamento della misoginia, ovvero discorsi offensivi usati nei tweet in lingua inglese contro le donne, sia i risultati ottenuti con questi modelli nel processo per l’identificazione automatica della misoginia in EVALITA2018.",
"title": ""
},
{
"docid": "78fe279ca9a3e355726ffacb09302be5",
"text": "In present, dynamically developing organizations, that often realize business tasks using the project-based approach, effective project management is of paramount importance. Numerous reports and scientific papers present lists of critical success factors in project management, and communication management is usually at the very top of the list. But even though the communication practices are found to be associated with most of the success dimensions, they are not given enough attention and the communication processes and practices formalized in the company's project management methodology are neither followed nor prioritized by project managers. This paper aims at supporting project managers and teams in more effective implementation of best practices in communication management by proposing a set of communication management patterns, which promote a context-problem-solution approach to communication management in projects.",
"title": ""
},
{
"docid": "f70bd0a47eac274a1bb3b964f34e0a63",
"text": "Although deep neural network (DNN) has achieved many state-of-the-art results, estimating the uncertainty presented in the DNN model and the data is a challenging task. Problems related to uncertainty such as classifying unknown classes (class which does not appear in the training data) data as known class with high confidence, is critically concerned in the safety domain area (e.g, autonomous driving, medical diagnosis). In this paper, we show that applying current Bayesian Neural Network (BNN) techniques alone does not effectively capture the uncertainty. To tackle this problem, we introduce a simple way to improve the BNN by using one class classification (in this paper, we use the term ”set classification” instead). We empirically show the result of our method on an experiment which involves three datasets: MNIST, notMNIST and FMNIST.",
"title": ""
},
{
"docid": "02ea5b61b22d5af1b9362ca46ead0dea",
"text": "This paper describes a student project examining mechanisms with which to attack Bluetooth-enabled devices. The paper briefly describes the protocol architecture of Bluetooth and the Java interface that programmers can use to connect to Bluetooth communication services. Several types of attacks are described, along with a detailed example of two attack tools, Bloover II and BT Info.",
"title": ""
},
{
"docid": "1aa3d2456e34c8ab59a340fd32825703",
"text": "It is well known that guided soft tissue healing with a provisional restoration is essential to obtain optimal anterior esthetics in the implant prosthesis. What is not well known is how to transfer a record of beautiful anatomically healed tissue to the laboratory. With the advent of emergence profile healing abutments and corresponding impression copings, there has been a dramatic improvement over the original 4.0-mm diameter design. This is a great improvement, however, it still does not accurately transfer a record of anatomically healed tissue, which is often triangularly shaped, to the laboratory, because the impression coping is a round cylinder. This article explains how to fabricate a \"custom impression coping\" that is an exact record of anatomically healed tissue for accurate duplication. This technique is significant because it allows an even closer replication of the natural dentition.",
"title": ""
},
{
"docid": "3005c32c7cf0e90c59be75795e1c1fbc",
"text": "In this paper, a novel AR interface is proposed that provides generic solutions to the tasks involved in augmenting simultaneously different types of virtual information and processing of tracking data for natural interaction. Participants within the system can experience a real-time mixture of 3D objects, static video, images, textual information and 3D sound with the real environment. The user-friendly AR interface can achieve maximum interaction using simple but effective forms of collaboration based on the combinations of human–computer interaction techniques. To prove the feasibility of the interface, the use of indoor AR techniques are employed to construct innovative applications and demonstrate examples from heritage to learning systems. Finally, an initial evaluation of the AR interface including some initial results is presented.",
"title": ""
},
{
"docid": "695766e9a526a0a25c4de430242e46d2",
"text": "In the large-scale Wireless Metropolitan Area Network (WMAN) consisting of many wireless Access Points (APs),choosing the appropriate position to place cloudlet is very important for reducing the user's access delay. For service provider, it isalways very costly to deployment cloudlets. How many cloudletsshould be placed in a WMAN and how much resource eachcloudlet should have is very important for the service provider. In this paper, we study the cloudlet placement and resourceallocation problem in a large-scale Wireless WMAN, we formulatethe problem as an novel cloudlet placement problem that givenan average access delay between mobile users and the cloudlets, place K cloudlets to some strategic locations in the WMAN withthe objective to minimize the number of use cloudlet K. Wethen propose an exact solution to the problem by formulatingit as an Integer Linear Programming (ILP). Due to the poorscalability of the ILP, we devise a clustering algorithm K-Medoids(KM) for the problem. For a special case of the problem whereall cloudlets computing capabilities have been given, we proposean efficient heuristic for it. We finally evaluate the performanceof the proposed algorithms through experimental simulations. Simulation result demonstrates that the proposed algorithms areeffective.",
"title": ""
},
{
"docid": "d4f3dc5efe166df222b2a617d5fbd5e4",
"text": "IKEA is the largest furniture retailer in the world. Their critical success factor is that IKEA can seamlessly integrate and optimize end-to-end supply chain to maximize customer value, eventually build their dominate position in entire value chain. This article summarizes and analyzes IKEA's successful practices of value chain management. Hopefully it can be a good reference or provide strategic insight for Chinese enterprises.",
"title": ""
},
{
"docid": "935282c2cbfa34ed24bc598a14a85273",
"text": "Cybersecurity is a national priority in this big data era. Because of negative externalities and the resulting lack of economic incentives, companies often underinvest in security controls, despite government and industry recommendations. Although many existing studies on security have explored technical solutions, only a few have looked at the economic motivations. To fill the gap, we propose an approach to increase the incentives of organizations to address security problems. Specifically, we utilize and process existing security vulnerability data, derive explicit security performance information, and disclose the information as feedback to organizations and the public. We regularly release information on the organizations with the worst security behaviors, imposing reputation loss on them. The information is also used by organizations for self-evaluation in comparison to others. Therefore, additional incentives are solicited out of reputation concern and social comparison. To test the effectiveness of our approach, we conducted a field quasi-experiment for outgoing spam for 1,718 autonomous systems in eight countries and published SpamRankings.net, the website we created to release information. We found that the treatment group subject to information disclosure reduced outgoing spam approximately by 16%. We also found that the more observed outgoing spam from the top spammer, the less likely an organization would be to reduce its own outgoing spam, consistent with the prediction by social comparison theory. Our results suggest that social information and social comparison can be effectively leveraged to encourage desirable behavior. Our study contributes to both information architecture design and public policy by suggesting how information can be used as intervention to impose economic incentives. The usual disclaimers apply for NSF grants 1228990 and 0831338.",
"title": ""
},
{
"docid": "72485a3c94c2dfa5121e91f2a3fc0f4a",
"text": "Four experiments support the hypothesis that syntactically relevant information about verbs is encoded in the lexicon in semantic event templates. A verb's event template represents the participants in an event described by the verb and the relations among the participants. The experiments show that lexical decision times are longer for verbs with more complex templates than verbs with less complex templates and that, for both transitive and intransitive sentences, sentences containing verbs with more complex templates take longer to process. In contrast, sentence processing times did not depend on the probabilities with which the verbs appear in transitive versus intransitive constructions in a large corpus of naturally produced sentences.",
"title": ""
},
{
"docid": "ef4272cd4b0d4df9aa968cc9ff528c1e",
"text": "Estimating action quality, the process of assigning a \"score\" to the execution of an action, is crucial in areas such as sports and health care. Unlike action recognition, which has millions of examples to learn from, the action quality datasets that are currently available are small-typically comprised of only a few hundred samples. This work presents three frameworks for evaluating Olympic sports which utilize spatiotemporal features learned using 3D convolutional neural networks (C3D) and perform score regression with i) SVR ii) LSTM and iii) LSTM followed by SVR. An efficient training mechanism for the limited data scenarios is presented for clip-based training with LSTM. The proposed systems show significant improvement over existing quality assessment approaches on the task of predicting scores of diving, vault, figure skating. SVR-based frameworks yield better results, LSTM-based frameworks are more natural for describing an action and can be used for improvement feedback.",
"title": ""
},
{
"docid": "f376948c1b8952b0b19efad3c5ca0471",
"text": "This essay grew out of an examination of one-tailed significance testing. One-tailed tests were little advocated by the founders of modern statistics but are widely used and recommended nowadays in the biological, behavioral and social sciences. The high frequency of their use in ecology and animal behavior and their logical indefensibil-ity have been documented in a companion review paper. In the present one, we trace the roots of this problem and counter some attacks on significance testing in general. Roots include: the early but irrational dichotomization of the P scale and adoption of the 'significant/non-significant' terminology; the mistaken notion that a high P value is evidence favoring the null hypothesis over the alternative hypothesis; and confusion over the distinction between statistical and research hypotheses. Resultant widespread misuse and misinterpretation of significance tests have also led to other problems, such as unjustifiable demands that reporting of P values be disallowed or greatly reduced and that reporting of confidence intervals and standardized effect sizes be required in their place. Our analysis of these matters thus leads us to a recommendation that for standard types of significance assessment the paleoFisherian and Neyman-Pearsonian paradigms be replaced by a neoFisherian one. The essence of the latter is that a critical α (probability of type I error) is not specified, the terms 'significant' and 'non-significant' are abandoned, that high P values lead only to suspended judgments, and that the so-called \" three-valued logic \" of Cox, Kaiser, Tukey, Tryon and Harris is adopted explicitly. Confidence intervals and bands, power analyses, and severity curves remain useful adjuncts in particular situations. Analyses conducted under this paradigm we term neoFisherian significance assessments (NFSA). Their role is assessment of the existence, sign and magnitude of statistical effects. The common label of null hypothesis significance tests (NHST) is retained for paleoFisherian and Neyman-Pearsonian approaches and their hybrids. The original Neyman-Pearson framework has no utility outside quality control type applications. Some advocates of Bayesian, likelihood and information-theoretic approaches to model selection have argued that P values and NFSAs are of little or no value, but those arguments do not withstand critical review. Champions of Bayesian methods in particular continue to overstate their value and relevance. 312 Hurlbert & Lombardi • ANN. ZOOL. FeNNICI Vol. 46 \" … the object of statistical methods is the reduction of data. A quantity of data … is to be replaced by relatively few quantities which shall …",
"title": ""
},
{
"docid": "4ad3c199ad1ba51372e9f314fc1158be",
"text": "Inner lead bonding (ILB) is used to thermomechanically join the Cu inner leads on a flexible film tape and Au bumps on a driver IC chip to form electrical paths. With the newly developed film carrier assembly technology, called chip on film (COF), the bumps are prepared separately on a film tape substrate and bonded on the finger lead ends beforehand; therefore, the assembly of IC chips can be made much simpler and cheaper. In this paper, three kinds of COF samples, namely forming, wrinkle, and flat samples, were prepared using conventional gang bonder. The peeling test was used to examine the bondability of ILB in terms of the adhesion strength between the inner leads and the bumps. According to the peeling test results, flat samples have competent strength, less variation, and better appearance than when using flip-chip bonder.",
"title": ""
},
{
"docid": "c89ca701d947ba6594be753470f152ac",
"text": "The visualization of an image collection is the process of displaying a collection of images on a screen under some specific layout requirements. This paper focuses on an important problem that is not well addressed by the previous methods: visualizing image collections into arbitrary layout shapes while arranging images according to user-defined semantic or visual correlations (e.g., color or object category). To this end, we first propose a property-based tree construction scheme to organize images of a collection into a tree structure according to user-defined properties. In this way, images can be adaptively placed with the desired semantic or visual correlations in the final visualization layout. Then, we design a two-step visualization optimization scheme to further optimize image layouts. As a result, multiple layout effects including layout shape and image overlap ratio can be effectively controlled to guarantee a satisfactory visualization. Finally, we also propose a tree-transfer scheme such that visualization layouts can be adaptively changed when users select different “images of interest.” We demonstrate the effectiveness of our proposed approach through the comparisons with state-of-the-art visualization techniques.",
"title": ""
},
{
"docid": "e787a1486a6563c15a74a07ed9516447",
"text": "This chapter describes how engineering principles can be used to estimate joint forces. Principles of static and dynamic analysis are reviewed, with examples of static analysis applied to the hip and elbow joints and to the analysis of joint forces in human ancestors. Applications to indeterminant problems of joint mechanics are presented and utilized to analyze equine fetlock joints.",
"title": ""
},
{
"docid": "ade59b46fca7fbf99800370435e1afe6",
"text": "etretinate to PUVA was associated with better treatment response. In our patients with psoriasis, topical PUVA achieved improvement rates comparable with oral PUVA, with a mean cumulative UVA dose of 187.5 J ⁄ cm. Our study contradicts previous observations made in other studies on vitiligo and demonstrates that topical PUVA does have a limited therapeutic effect in vitiligo. Oral and topical PUVA showed low but equal efficacy in patients with vitiligo with a similar mean number of treatments to completion. Approximately one-quarter of our patients with vitiligo had discontinued PUVA therapy, which probably affected the outcome. It has been shown that at least 1 year of continuous and regular therapy with oral PUVA is needed to achieve a sufficient degree of repigmentation. Shorter periods were not found to be sufficient to observe clinical signs of repigmentation. Currently it is not known if the same holds true for topical PUVA. In conclusion, our results show that the efficacy of topical PUVA is comparable with that of oral PUVA, and favoured topical PUVA treatment especially in the eczema group with respect to cumulative UVA doses and success rates. Given the necessity for long-term treatment with local PUVA therapies, successful management requires maintenance of full patient compliance. Because of this, the results in this study should not only be attributed to the therapies. Because of its safety and the simplicity, topical PUVA should be considered as an alternative therapy to other phototherapy methods.",
"title": ""
},
{
"docid": "5a0fe40414f7881cc262800a43dfe4d0",
"text": "In this work, a passive rectifier circuit is presented, which is operating at 868 MHz. It allows energy harvesting from low power RF waves with a high efficiency. It consists of a novel multiplier circuit design and high quality components to reduce parasitic effects, losses and reaches a low startup voltage. Using lower capacitor rises up the switching speed of the whole circuit. An inductor L serves to store energy in a magnetic field during the negative cycle wave and returns it during the positive one. A low pass filter is arranged in cascade with the rectifier circuit to reduce ripple at high frequencies and to get a stable DC signal. A 50 kΩ load is added at the output to measure the output power and to visualize the behavior of the whole circuit. Simulation results show an outstanding potential of this RF-DC converter witch has a relative high sensitivity beginning with -40 dBm.",
"title": ""
},
{
"docid": "ed1a3ca3e558eeb33e2841fa4b9c28d2",
"text": "© 2010 ETRI Journal, Volume 32, Number 4, August 2010 In this paper, we present a low-voltage low-dropout voltage regulator (LDO) for a system-on-chip (SoC) application which, exploiting the multiplication of the Miller effect through the use of a current amplifier, is frequency compensated up to 1-nF capacitive load. The topology and the strategy adopted to design the LDO and the related compensation frequency network are described in detail. The LDO works with a supply voltage as low as 1.2 V and provides a maximum load current of 50 mA with a drop-out voltage of 200 mV: the total integrated compensation capacitance is about 40 pF. Measurement results as well as comparison with other SoC LDOs demonstrate the advantage of the proposed topology.",
"title": ""
},
{
"docid": "e5d323fe9bf2b5800043fa0e4af6849a",
"text": "A central question in psycholinguistic research is how listeners isolate words from connected speech despite the paucity of clear word-boundary cues in the signal. A large body of empirical evidence indicates that word segmentation is promoted by both lexical (knowledge-derived) and sublexical (signal-derived) cues. However, an account of how these cues operate in combination or in conflict is lacking. The present study fills this gap by assessing speech segmentation when cues are systematically pitted against each other. The results demonstrate that listeners do not assign the same power to all segmentation cues; rather, cues are hierarchically integrated, with descending weights allocated to lexical, segmental, and prosodic cues. Lower level cues drive segmentation when the interpretive conditions are altered by a lack of contextual and lexical information or by white noise. Taken together, the results call for an integrated, hierarchical, and signal-contingent approach to speech segmentation.",
"title": ""
}
] |
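The cloudlet-placement passage above mentions a K-Medoids (KM) clustering heuristic as a scalable alternative to the ILP. The sketch below is a plain alternating K-medoids over AP coordinates, with Euclidean distance standing in for access delay; the random AP layout, the distance proxy, and the specific update rule are assumptions for illustration rather than the authors' exact algorithm.

```python
# Sketch of K-medoids placement of K cloudlets over wireless APs.
# Assumptions: Euclidean distance as a delay proxy, random AP layout,
# simple alternating assign/re-pick heuristic.
import numpy as np

def k_medoids(points, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    n = len(points)
    # Pairwise distances between APs, used as the access-delay proxy.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        # Assign every AP to its nearest cloudlet location.
        assign = d[:, medoids].argmin(axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(assign == c)[0]
            if len(members) == 0:
                continue
            # Re-pick the member minimizing total distance within the cluster.
            within = d[np.ix_(members, members)].sum(axis=1)
            new_medoids[c] = members[within.argmin()]
        if np.array_equal(np.sort(new_medoids), np.sort(medoids)):
            break  # converged
        medoids = new_medoids
    assign = d[:, medoids].argmin(axis=1)
    avg_delay = d[np.arange(n), medoids[assign]].mean()
    return medoids, assign, avg_delay

# Random AP layout for demonstration: 300 APs in a 10 x 10 km area.
rng = np.random.default_rng(42)
aps = rng.uniform(0, 10, size=(300, 2))
for k in (3, 5, 8):
    _, _, avg = k_medoids(aps, k)
    print(f"K={k}: average AP-to-cloudlet distance (delay proxy) = {avg:.2f}")
```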
scidocsrr
|
b8fc65054d305075957542df91cc1e79
|
Towards unified depth and semantic prediction from a single image
|
[
{
"docid": "bd042b5e8d2d92966bde5e224bb8220b",
"text": "Output of our system is a 3D semantic+occupancy map. However due to lack of ground truth in that form, we need to evaluate using indirect approaches. To evaluate the segmentation accuracy, we evaluated it with standard 2D semantic segmentation methods for which human annotated ground truth exists. The 2D segmentation is obtained by back-projecting our 3D map to the camera images. However these kind of evaluation negatively harms our scores for the following reasons:",
"title": ""
},
{
"docid": "8a77882cfe06eaa88db529432ed31b0c",
"text": "We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.",
"title": ""
},
{
"docid": "cc4c58f1bd6e5eb49044353b2ecfb317",
"text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.",
"title": ""
}
] |
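The first passage above evaluates a 3D semantic+occupancy map indirectly, by back-projecting it into the camera images and scoring the result against human-annotated 2D labels. A minimal sketch of that 2D scoring step (per-class IoU and mean IoU from a confusion matrix) is shown below; the class count, ignore label, and random example images are assumptions for illustration only.

```python
# Sketch of 2D semantic-segmentation scoring: per-class IoU and mean IoU.
# Assumptions: 4 classes, ignore label 255, random example label images.
import numpy as np

def confusion_matrix(gt, pred, n_classes, ignore_label=255):
    valid = gt != ignore_label                     # skip unlabeled pixels
    gt, pred = gt[valid], pred[valid]
    idx = gt.astype(np.int64) * n_classes + pred.astype(np.int64)
    return np.bincount(idx, minlength=n_classes * n_classes).reshape(n_classes, n_classes)

def iou_from_confusion(cm):
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    denom = tp + fp + fn
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)
    return iou, np.nanmean(iou)

# Toy example: pretend these are the back-projected prediction and the
# annotated ground truth for one image, with 4 semantic classes.
rng = np.random.default_rng(0)
gt = rng.integers(0, 4, size=(240, 320)).astype(np.uint8)
pred = gt.copy()
noise = rng.random(gt.shape) < 0.15                # corrupt 15% of pixels
pred[noise] = rng.integers(0, 4, size=noise.sum()).astype(np.uint8)

cm = confusion_matrix(gt, pred, n_classes=4)
per_class, mean_iou = iou_from_confusion(cm)
print("per-class IoU:", np.round(per_class, 3), "mean IoU:", round(float(mean_iou), 3))
```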
[
{
"docid": "427ce3159bf598c306645a5b9e670c95",
"text": "In recent years, microblogging platforms have become good places to spread various spams, making the problem of gauging information credibility on social networks receive considerable attention especially under an emergency situation. Unlike previous studies on detecting rumors using tweets' inherent attributes generally, in this work, we shift the premise and focus on identifying event rumors on Weibo by extracting features from crowd responses that are texts of retweets (reposting tweets) and comments under a certain social event. Firstly the paper proposes a method of collecting theme data, including a sample set of tweets which have been confirmed to be false rumors based on information from the official rumor-busting service provided by Weibo. Secondly clustering analysis of tweets are made to examine the text features extracted from retweets and comments, and a classifier is trained based on observed feature distribution to automatically judge rumors from a mixed set of valid news and false information. The experiments show that the new features we propose are indeed effective in the classification, and especially some stop words and punctuations which are treated as noises in previous works can play an important role in rumor detection. To the best of our knowledge, this work is the first to detect rumors in Chinese via crowd responses under an emergency situation.",
"title": ""
},
{
"docid": "c3c5931200ff752d8285cc1068e779ee",
"text": "Speech-driven facial animation is the process which uses speech signals to automatically synthesize a talking character. The majority of work in this domain creates a mapping from audio features to visual features. This often requires post-processing using computer graphics techniques to produce realistic albeit subject dependent results. We present a system for generating videos of a talking head, using a still image of a person and an audio clip containing speech, that doesn’t rely on any handcrafted intermediate features. To the best of our knowledge, this is the first method capable of generating subject independent realistic videos directly from raw audio. Our method can generate videos which have (a) lip movements that are in sync with the audio and (b) natural facial expressions such as blinks and eyebrow movements 1. We achieve this by using a temporal GAN with 2 discriminators, which are capable of capturing different aspects of the video. The effect of each component in our system is quantified through an ablation study. The generated videos are evaluated based on their sharpness, reconstruction quality, and lip-reading accuracy. Finally, a user study is conducted, confirming that temporal GANs lead to more natural sequences than a static GAN-based approach.",
"title": ""
},
{
"docid": "dd740ca578eefd345b9b137210fdad82",
"text": "The new ultrafast cardiac single photon emission computed tomography (SPECT) cameras with cadmium-zinc-telluride (CZT)-based detectors are faster and produce higher quality images as compared to conventional SPECT cameras. We assessed the need for additional imaging, total imaging time, tracer dose and 1-year outcome between patients scanned with the CZT camera and a conventional SPECT camera. A total of 456 consecutive stable patients without known coronary artery disease underwent myocardial perfusion imaging on a hybrid SPECT/CT (64-slice) scanner using either conventional (n = 225) or CZT SPECT (n = 231). All patients started with low-dose stress imaging, combined with coronary calcium scoring. Rest imaging was only done when initial stress SPECT testing was equivocal or abnormal. Coronary CT angiography was subsequently performed in cases of ischaemic or equivocal SPECT findings. Furthermore, 1-year clinical follow-up was obtained with regard to coronary revascularization, nonfatal myocardial infarction or death. Baseline characteristics were comparable between the two groups. With the CZT camera, the need for rest imaging (35 vs 56%, p < 0.001) and additional coronary CT angiography (20 vs 28%, p = 0.025) was significantly lower as compared with the conventional camera. This resulted in a lower mean total administered isotope dose per patient (658 ± 390 vs 840 ± 421 MBq, p < 0.001) and shorter imaging time (6.39 ± 1.91 vs 20.40 ± 7.46 min, p < 0.001) with the CZT camera. After 1 year, clinical outcome was comparable between the two groups. As compared to images on a conventional SPECT camera, stress myocardial perfusion images acquired on a CZT camera are more frequently interpreted as normal with identical clinical outcome after 1-year follow-up. This lowers the need for additional testing, results in lower mean radiation dose and shortens imaging time.",
"title": ""
},
{
"docid": "0209132c7623c540c125a222552f33ac",
"text": "This paper reviews the criticism on the 4Ps Marketing Mix framework, the most popular tool of traditional marketing management, and categorizes the main objections of using the model as the foundation of physical marketing. It argues that applying the traditional approach, based on the 4Ps paradigm, is also a poor choice in the case of virtual marketing and identifies two main limitations of the framework in online environments: the drastically diminished role of the Ps and the lack of any strategic elements in the model. Next to identifying the critical factors of the Web marketing, the paper argues that the basis for successful E-Commerce is the full integration of the virtual activities into the company’s physical strategy, marketing plan and organisational processes. The four S elements of the Web-Marketing Mix framework present a sound and functional conceptual basis for designing, developing and commercialising Business-to-Consumer online projects. The model was originally developed for educational purposes and has been tested and refined by means of field projects; two of them are presented as case studies in the paper. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "38c96356f5fd3daef5f1f15a32971b57",
"text": "Recommendation systems make suggestions about artifacts to a user. For instance, they may predict whether a user would be interested in seeing a particular movie. Social recomendation methods collect ratings of artifacts from many individuals and use nearest-neighbor techniques to make recommendations to a user concerning new artifacts. However, these methods do not use the significant amount of other information that is often available about the nature of each artifact -such as cast lists or movie reviews, for example. This paper presents an inductive learning approach to recommendation that is able to use both ratings information and other forms of information about each artifact in predicting user preferences. We show that our method outperforms an existing social-filtering method in the domain of movie recommendations on a dataset of more than 45,000 movie ratings collected from a community of over 250 users. Introduction Recommendations are a part of everyday life. We usually rely on some external knowledge to make informed decisions about a particular artifact or action, for instance when we are going to see a movie or going to see a doctor. This knowledge can be derived from social processes. At other times, our judgments may be based on available information about an artifact and our known preferences. There are many factors which may influence a person in making choices, and ideally one would like to model as many of these factors as possible in a recommendation system. There are some general approaches to this problem. In one approach, the user of the system provides ratings of some artifacts or items. The system makes informed guesses about other items the user may like based on ratings other users have provided. This is the framework for social-filtering methods (Hill, Stead, Rosenstein Furnas 1995; Shardanand & Maes 1995). In a second approach, the system accepts information describing the nature of an item, and based on a sample of the user’s preferences, learns to predict which items the user will like (Lang 1995; Pazzani, Muramatsu, & Billsus 1996). We will call this approach content-based filtering, as it does not rely on social information (in the form of other users’ ratings). Both social and content-based filtering can be cast as learning problems: the objective is to *Department of Computer Science, Rutgers University, Piscataway, NJ 08855 We would like to thank Susan Dumais for useful discussions during the early stages of this work. Copyright ~)1998, American Association for Artificial Intelligence (www.aaai.org). All rights reserved. learn a function that can take a description of a user and an artifact and predict the user’s preferences concerning the artifact. Well-known recommendation systems like Recommender (Hill, Stead, Rosenstein & Furnas 1995) and Firefly (http: //www.firefly.net) (Shardanand & Maes 1995) are based on social-filtering principles. Recommender, the baseline system used in the work reported here, recommends as yet unseen movies to a user based on his prior ratings of movies and their similarity to the ratings of other users. Social-filtering systems perform well using only numeric assessments of worth, i.e., ratings. However, social-filtering methods leave open the question of what role content can play in the recommen-",
"title": ""
},
{
"docid": "168f901cbecec27a71122eea607d17ce",
"text": "This paper introduces Cartograph, a visualization system that harnesses the vast amount of world knowledge encoded within Wikipedia to create thematic maps of almost any data. Cartograph extends previous systems that visualize non-spatial data using geographic approaches. While these systems required data with an existing semantic structure, Cartograph unlocks spatial visualization for a much larger variety of datasets by enhancing input datasets with semantic information extracted from Wikipedia. Cartograph's map embeddings use neural networks trained on Wikipedia article content and user navigation behavior. Using these embeddings, the system can reveal connections between points that are unrelated in the original data sets, but are related in meaning and therefore embedded close together on the map. We describe the design of the system and key challenges we encountered, and we present findings from an exploratory user study",
"title": ""
},
{
"docid": "b5babae9b9bcae4f87f5fe02459936de",
"text": "The study evaluated the effects of formocresol (FC), ferric sulphate (FS), calcium hydroxide (Ca[OH](2)), and mineral trioxide aggregate (MTA) as pulp dressing agents in pulpotomized primary molars. Sixteen children each with at least four primary molars requiring pulpotomy were selected. Eighty selected teeth were divided into four groups and treated with one of the pulpotomy agent. The children were recalled for clinical and radiographic examination every 6 months during 2 years of follow-up. Eleven children with 56 teeth arrived for clinical and radiographic follow-up evaluation at 24 months. The follow-up evaluations revealed that the success rate was 76.9% for FC, 73.3% for FS, 46.1% for Ca(OH)(2), and 66.6% for MTA. In conclusion, Ca(OH)(2)is less appropriate for primary teeth pulpotomies than the other pulpotomy agents. FC and FS appeared to be superior to the other agents. However, there was no statistically significant difference between the groups.",
"title": ""
},
{
"docid": "4be087f37232aefa30da1da34a5e9ff5",
"text": "Many clinical studies have shown that electroencephalograms (EEG) of Alzheimer patients (AD) often have an abnormal power spectrum. In this paper a frequency band analysis of AD EEG signals is presented, with the aim of improving the diagnosis of AD from EEG signals. Relative power in different EEG frequency bands is used as features to distinguish between AD patients and healthy control subjects. Many different frequency bands between 4 and 30Hz are systematically tested, besides the traditional frequency bands, e.g., theta band (4–8Hz). The discriminative power of the resulting spectral features is assessed through statistical tests (Mann-Whitney U test). Moreover, linear discriminant analysis is conducted with those spectral features. The optimized frequency ranges (4–7Hz, 8–15Hz, 19–24Hz) yield substantially better classification performance than the traditional frequency bands (4–8Hz, 8–12Hz, 12–30Hz); the frequency band 4–7Hz is the optimal frequency range for detecting AD, which is similar to the classical theta band. The frequency bands were also optimized as features through leave-one-out crossvalidation, resulting in error-free classification. The optimized frequency bands may improve existing EEG based diagnostic tools for AD. Additional testing on larger AD datasets is required to verify the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "e1ecae98985cf87523492605bcfb468c",
"text": "This four-part series of articles provides an overview of the neurological examination of the elderly patient, particularly as it applies to patients with cognitive impairment, dementia or cerebrovascular disease.The focus is on the method and interpretation of the bedside physical examination; the mental state and cognitive examinations are not covered in this review.Part 1 (featured in the September issue) began with an approach to the neurological examination in normal aging and in disease, and reviewed components of the general physical,head and neck,neurovascular and cranial nerve examinations relevant to aging and dementia.Part 2 (featured in the October issue) covered the motor examination with an emphasis on upper motor neuron signs and movement disorders. Part 3(featured in the November issue) reviewed the assessment of coordination,balance and gait,and Part 4, featured here, discusses the muscle stretch reflexes, pathological and primitive reflexes, and sensory examination, and offers concluding remarks.Throughout this series, special emphasis is placed on the evaluation and interpretation of neurological signs in light of findings considered normal in the elderly.",
"title": ""
},
{
"docid": "25d2a1234952508c351fceb6b8d964ea",
"text": "This article provides an introduction and overview of sensory integration theory as it is used in occupational therapy practice for children with developmental disabilities. This review of the theoretical tenets of the theory, its historical foundations, and early research provides the reader with a basis for exploring current uses and applications. The key principles of the sensory integrative approach, including concepts such as \"the just right challenge\" and \"the adaptive response\" as conceptualized by A. Jean Ayres, the theory's founder, are presented to familiarize the reader with the approach. The state of research in this area is presented, including studies underway to further delineate the subtypes of sensory integrative dysfunction, the neurobiological mechanisms of poor sensory processing, advances in theory development, and the development of a fidelity measure for use in intervention studies. Finally, this article reviews the current state of the evidence to support this approach and suggests that consensual knowledge and empirical research are needed to further elucidate the theory and its utility for a variety of children with developmental disabilities. This is especially critical given the public pressure by parents of children with autism and other developmental disabilities to obtain services and who have anecdotally noted the utility of sensory integration therapy for helping their children function more independently. Key limiting factors to research include lack of funding, paucity of doctorate trained clinicians and researchers in occupational therapy, and the inherent heterogeneity of the population of children affected by sensory integrative dysfunction. A call to action for occupational therapy researchers, funding agencies, and other professions is made to support ongoing efforts and to develop initiatives that will lead to better diagnoses and effective intervention for sensory integrative dysfunction, which will improve the lives of children and their families.",
"title": ""
},
{
"docid": "4e88d4afb9a11713a7396612863e4176",
"text": "Wind turbines are typically operated to maximize their own performance without considering the impact of wake effects on nearby turbines. There is the potential to increase total power and reduce structural loads by properly coordinating the individual turbines in a wind farm. The effective design and analysis of such coordinated controllers requires turbine wake models of sufficient accuracy but low computational complexity. This paper first formulates a coordinated control problem for a two-turbine array. Next, the paper reviews several existing simulation tools that range from low-fidelity, quasi-static models to high-fidelity, computational fluid dynamic models. These tools are compared by evaluating the power, loads, and flow characteristics for the coordinated two-turbine array. The results in this paper highlight the advantages and disadvantages of existing wake models for design and analysis of coordinated wind farm controllers.",
"title": ""
},
{
"docid": "766b18cdae33d729d21d6f1b2b038091",
"text": "1.1 Terminology Intercultural communication or communication between people of different cultural backgrounds has always been and will probably remain an important precondition of human co-existance on earth. The purpose of this paper is to provide a framework of factors thatare important in intercultural communication within a general model of human, primarily linguistic, communication. The term intercultural is chosen over the largely synonymousterm cross-cultural because it is linked to language use such as “interdisciplinary”, that is cooperation between people with different scientific backgrounds. Perhaps the term also has somewhat fewer connotations than crosscultural. It is not cultures that communicate, whatever that might imply, but people (and possibly social institutions) with different cultural backgrounds that do. In general, the term”cross-cultural” is probably best used for comparisons between cultures (”crosscultural comparison”).",
"title": ""
},
{
"docid": "c4a74726ac56b0127e5920098e6f0258",
"text": "BACKGROUND\nAttention Deficit Hyperactivity disorder (ADHD) is one of the most common and challenging childhood neurobehavioral disorders. ADHD is known to negatively impact children, their families, and their community. About one-third to one-half of patients with ADHD will have persistent symptoms into adulthood. The prevalence in the United States is estimated at 5-11%, representing 6.4 million children nationwide. The variability in the prevalence of ADHD worldwide and within the US may be due to the wide range of factors that affect accurate assessment of children and youth. Because of these obstacles to assessment, ADHD is under-diagnosed, misdiagnosed, and undertreated.\n\n\nOBJECTIVES\nWe examined factors associated with making and receiving the diagnosis of ADHD. We sought to review the consequences of a lack of diagnosis and treatment for ADHD on children's and adolescent's lives and how their families and the community may be involved in these consequences.\n\n\nMETHODS\nWe reviewed scientific articles looking for factors that impact the identification and diagnosis of ADHD and articles that demonstrate naturalistic outcomes of diagnosis and treatment. The data bases PubMed and Google scholar were searched from the year 1995 to 2015 using the search terms \"ADHD, diagnosis, outcomes.\" We then reviewed abstracts and reference lists within those articles to rule out or rule in these or other articles.\n\n\nRESULTS\nMultiple factors have significant impact in the identification and diagnosis of ADHD including parents, healthcare providers, teachers, and aspects of the environment. Only a few studies detailed the impact of not diagnosing ADHD, with unclear consequences independent of treatment. A more significant number of studies have examined the impact of untreated ADHD. The experience around receiving a diagnosis described by individuals with ADHD provides some additional insights.\n\n\nCONCLUSION\nADHD diagnosis is influenced by perceptions of many different members of a child's community. A lack of clear understanding of ADHD and the importance of its diagnosis and treatment still exists among many members of the community including parents, teachers, and healthcare providers. More basic and clinical research will improve methods of diagnosis and information dissemination. Even before further advancements in science, strong partnerships between clinicians and patients with ADHD may be the best way to reduce the negative impacts of this disorder.",
"title": ""
},
{
"docid": "8f7428569e1d3036cdf4842d48b56c22",
"text": "This paper describes a unified model for role-based access control (RBAC). RBAC is a proven technology for large-scale authorization. However, lack of a standard model results in uncertainty and confusion about its utility and meaning. The NIST model seeks to resolve this situation by unifying ideas from prior RBAC models, commercial products and research prototypes. It is intended to serve as a foundation for developing future standards. RBAC is a rich and open-ended technology which is evolving as users, researchers and vendors gain experience with it. The NIST model focuses on those aspects of RBAC for which consensus is available. It is organized into four levels of increasing functional capabilities called flat RBAC, hierarchical RBAC, constrained RBAC and symmetric RBAC. These levels are cumulative and each adds exactly one new requirement. An alternate approach comprising flat and hierarchical RBAC in an ordered sequence and two unordered features—constraints and symmetry—is also presented. The paper furthermore identifies important attributes of RBAC not included in the NIST model. Some are not suitable for inclusion in a consensus document. Others require further work and agreement before standardization is feasible.",
"title": ""
},
{
"docid": "73a998535ab03730595ce5d9c1f071f7",
"text": "This article familiarizes counseling psychologists with qualitative research methods in psychology developed in the tradition of European phenomenology. A brief history includes some of Edmund Husserl’s basic methods and concepts, the adoption of existential-phenomenology among psychologists, and the development and formalization of qualitative research procedures in North America. The choice points and alternatives in phenomenological research in psychology are delineated. The approach is illustrated by a study of a recovery program for persons repeatedly hospitalized for chronic mental illness. Phenomenological research is compared with other qualitative methods, and some of its benefits for counseling psychology are identified.",
"title": ""
},
{
"docid": "7755e8c9234f950d0d5449602269e34b",
"text": "In this paper we describe a privacy-preserving method for commissioning an IoT device into a cloud ecosystem. The commissioning consists of the device proving its manufacturing provenance in an anonymous fashion without reliance on a trusted third party, and for the device to be anonymously registered through the use of a blockchain system. We introduce the ChainAnchor architecture that provides device commissioning in a privacy-preserving fashion. The goal of ChainAnchor is (i) to support anonymous device commissioning, (ii) to support device-owners being remunerated for selling their device sensor-data to service providers, and (iii) to incentivize device-owners and service providers to share sensor-data in a privacy-preserving manner.",
"title": ""
},
{
"docid": "07e2dae7b1ed0c7164e59bd31b0d3f87",
"text": "The requirement to perform complicated statistic analysis of big data by institutions of engineering, scientific research, health care, commerce, banking and computer research is immense. However, the limitations of the widely used current desktop software like R, excel, minitab and spss gives a researcher limitation to deal with big data. The big data analytic tools like IBM Big Insight, Revolution Analytics, and tableau software are commercial and heavily license. Still, to deal with big data, client has to invest in infrastructure, installation and maintenance of hadoop cluster to deploy these analytical tools. Apache Hadoop is an open source distributed computing framework that uses commodity hardware. With this project, I intend to collaborate Apache Hadoop and R software over the on the Cloud. Objective is to build a SaaS (Software-as-a-Service) analytic platform that stores & analyzes big data using open source Apache Hadoop and open source R software. The benefits of this cloud based big data analytical service are user friendliness & cost as it is developed using open-source software. The system is cloud based so users have their own space in cloud where user can store there data. User can browse data, files, folders using browser and arrange datasets. User can select dataset and analyze required dataset and store result back to cloud storage. Enterprise with a cloud environment can save cost of hardware, upgrading software, maintenance or network configuration, thus it making it more economical.",
"title": ""
},
{
"docid": "7c0328e05e30a11729bc80255e09a5b8",
"text": "This paper presents a preliminary design for a moving-target defense (MTD) for computer networks to combat an attacker's asymmetric advantage. The MTD system reasons over a set of abstract models that capture the network's configuration and its operational and security goals to select adaptations that maintain the operational integrity of the network. The paper examines both a simple (purely random) MTD system as well as an intelligent MTD system that uses attack indicators to augment adaptation selection. A set of simulation-based experiments show that such an MTD system may in fact be able to reduce an attacker's success likelihood. These results are a preliminary step towards understanding and quantifying the impact of MTDs on computer networks.",
"title": ""
},
{
"docid": "70bed43cdfd50586e803bf1a9c8b3c0a",
"text": "We design a way to model apps as vectors, inspired by the recent deep learning approach to vectorization of words called word2vec. Our method relies on how users use apps. In particular, we visualize the time series of how each user uses mobile apps as a “document”, and apply the recent word2vec modeling on these documents, but the novelty is that the training context is carefully weighted by the time interval between the usage of successive apps. This gives us the app2vec vectorization of apps. We apply this to industrial scale data from Yahoo! and (a) show examples that app2vec captures semantic relationships between apps, much as word2vec does with words, (b) show using Yahoo!'s extensive human evaluation system that 82% of the retrieved top similar apps are semantically relevant, achieving 37% lift over bag-of-word approach and 140% lift over matrix factorization approach to vectorizing apps, and (c) finally, we use app2vec to predict app-install conversion and improve ad conversion prediction accuracy by almost 5%. This is the first industry scale design, training and use of app vectorization.",
"title": ""
}
] |
scidocsrr
|
cf16b8c55bbe6d9614987461925f2800
|
Fast Scene Understanding for Autonomous Driving
|
[
{
"docid": "7af26168ae1557d8633a062313d74b78",
"text": "This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires fewer training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available.",
"title": ""
},
{
"docid": "b8b73a2f4924aaa34cf259d0f5eca3ba",
"text": "Semantic segmentation and object detection research have recently achieved rapid progress. However, the former task has no notion of different instances of the same object, and the latter operates at a coarse, bounding-box level. We propose an Instance Segmentation system that produces a segmentation map where each pixel is assigned an object class and instance identity label. Most approaches adapt object detectors to produce segments instead of boxes. In contrast, our method is based on an initial semantic segmentation module, which feeds into an instance subnetwork. This subnetwork uses the initial category-level segmentation, along with cues from the output of an object detector, within an end-to-end CRF to predict instances. This part of our model is dynamically instantiated to produce a variable number of instances per image. Our end-to-end approach requires no post-processing and considers the image holistically, instead of processing independent proposals. Therefore, unlike some related work, a pixel cannot belong to multiple instances. Furthermore, far more precise segmentations are achieved, as shown by our substantial improvements at high APr thresholds.",
"title": ""
},
{
"docid": "5f4d10a1a180f6af3d35ca117cd4ee19",
"text": "This work addresses the task of instance-aware semantic segmentation. Our key motivation is to design a simple method with a new modelling-paradigm, which therefore has a different trade-off between advantages and disadvantages compared to known approaches. Our approach, we term InstanceCut, represents the problem by two output modalities: (i) an instance-agnostic semantic segmentation and (ii) all instance-boundaries. The former is computed from a standard convolutional neural network for semantic segmentation, and the latter is derived from a new instance-aware edge detection model. To reason globally about the optimal partitioning of an image into instances, we combine these two modalities into a novel MultiCut formulation. We evaluate our approach on the challenging CityScapes dataset. Despite the conceptual simplicity of our approach, we achieve the best result among all published methods, and perform particularly well for rare object classes.",
"title": ""
}
] |
[
{
"docid": "42c6ec7e27bc1de6beceb24d52b7216c",
"text": "Internet of Things (IoT) refers to the expansion of Internet technologies to include wireless sensor networks (WSNs) and smart objects by extensive interfacing of exclusively identifiable, distributed communication devices. Due to the close connection with the physical world, it is an important requirement for IoT technology to be self-secure in terms of a standard information security model components. Autonomic security should be considered as a critical priority and careful provisions must be taken in the design of dynamic techniques, architectures and self-sufficient frameworks for future IoT. Over the years, many researchers have proposed threat mitigation approaches for IoT and WSNs. This survey considers specific approaches requiring minimal human intervention and discusses them in relation to self-security. This survey addresses and brings together a broad range of ideas linked together by IoT, autonomy and security. More particularly, this paper looks at threat mitigation approaches in IoT using an autonomic taxonomy and finally sets down future directions. & 2014 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "1bf69a2bffe2652e11ff8ec7f61b7c0d",
"text": "This research proposes and validates a design theory for digital platforms that support online communities (DPsOC). It addresses ways in which digital platforms can effectively support social interactions in online communities. Drawing upon prior literature on IS design theory, online communities, and platforms, we derive an initial set of propositions for designing effective DPsOC. Our overarching proposition is that three components of digital platform architecture (core, interface, and complements) should collectively support the mix of the three distinct types of social interaction structures of online community (information sharing, collaboration, and collective action). We validate the initial propositions and generate additional insights by conducting an in-depth analysis of an European digital platform for elderly care assistance. We further validate the propositions by analyzing three widely used digital platforms, including Twitter, Wikipedia, and Liquidfeedback, and we derive additional propositions and insights that can guide DPsOC design. We discuss the implications of this research for research and practice. Journal of Information Technology advance online publication, 10 February 2015; doi:10.1057/jit.2014.37",
"title": ""
},
{
"docid": "45494f14c2d9f284dd3ad3a5be49ca78",
"text": "Developing segmentation techniques for overlapping cells has become a major hurdle for automated analysis of cervical cells. In this paper, an automated three-stage segmentation approach to segment the nucleus and cytoplasm of each overlapping cell is described. First, superpixel clustering is conducted to segment the image into small coherent clusters that are used to generate a refined superpixel map. The refined superpixel map is passed to an adaptive thresholding step to initially segment the image into cellular clumps and background. Second, a linear classifier with superpixel-based features is designed to finalize the separation between nuclei and cytoplasm. Finally, edge and region based cell segmentation are performed based on edge enhancement process, gradient thresholding, morphological operations, and region properties evaluation on all detected nuclei and cytoplasm pairs. The proposed framework has been evaluated using the ISBI 2014 challenge dataset. The dataset consists of 45 synthetic cell images, yielding 270 cells in total. Compared with the state-of-the-art approaches, our approach provides more accurate nuclei boundaries, as well as successfully segments most of overlapping cells.",
"title": ""
},
{
"docid": "ba36f2cabea51ed99621a7aa104fed08",
"text": "Plant identification and classification play an important role in ecology, but the manual process is cumbersome even for experimented taxonomists. Technological advances allows the development of strategies to make these tasks easily and faster. In this context, this paper describes a methodology for plant identification and classification based on leaf shapes, that explores the discriminative power of the contour-centroid distance in the Fourier frequency domain in which some invariance (e.g. Rotation and scale) are guaranteed. In addition, it is also investigated the influence of feature selection techniques regarding classification accuracy. Our results show that by combining a set of features vectors - in the principal components space - and a feed forward neural network, an accuracy of 97.45% was achieved.",
"title": ""
},
{
"docid": "6761bd757cdd672f60c980b081d4dbc8",
"text": "Real-time eye and iris tracking is important for handsoff gaze-based password entry, instrument control by paraplegic patients, Internet user studies, as well as homeland security applications. In this project, a smart camera, LabVIEW and vision software tools are utilized to generate eye detection and tracking algorithms. The algorithms are uploaded to the smart camera for on-board image processing. Eye detection refers to finding eye features in a single frame. Eye tracking is achieved by detecting the same eye features across multiple image frames and correlating them to a particular eye. The algorithms are tested for eye detection and tracking under different conditions including different angles of the face, head motion speed, and eye occlusions to determine their usability for the proposed applications. This paper presents the implemented algorithms and performance results of these algorithms on the smart camera.",
"title": ""
},
{
"docid": "84e0b31ca5cbd158a673f59019d3ace3",
"text": "This paper presents a compact bi-directional solidstate transformer (SST) based on a current-source topology, featuring high-frequency galvanic isolation and only two minimal power conversion stages. The topology, referenced as Dynamic Current or Dyna-C, can be configured for multiterminal DC and/or multi-phase AC applications. The Dyna-C is robust, can be stacked for MV applications, and be paralleled for high current and high power applications. The paper will present some of the possible configurations of Dyna-C, and will discuss challenges associated with the control and operation. One core innovation presented in this paper is the management of transformer leakage energy when transitioning from one bridge to another while maintaining low device stresses and losses. Simulation and experimental results are used to validate operation of the topology.",
"title": ""
},
{
"docid": "361511f6c0e068442cd12377b9c3c9a6",
"text": "Machine learning methods are widely used for a variety of prediction problems. Prediction as a service is a paradigm in which service providers with technological expertise and computational resources may perform predictions for clients. However, data privacy severely restricts the applicability of such services, unless measures to keep client data private (even from the service provider) are designed. Equally important is to minimize the amount of computation and communication required between client and server. Fully homomorphic encryption offers a possible way out, whereby clients may encrypt their data, and on which the server may perform arithmetic computations. The main drawback of using fully homomorphic encryption is the amount of time required to evaluate large machine learning models on encrypted data. We combine ideas from the machine learning literature, particularly work on binarization and sparsification of neural networks, together with algorithmic tools to speed-up and parallelize computation using encrypted data.",
"title": ""
},
{
"docid": "6b1dc94c4c70e1c78ea32a760b634387",
"text": "3d reconstruction from a single image is inherently an ambiguous problem. Yet when we look at a picture, we can often infer 3d information about the scene. Humans perform single-image 3d reconstructions by using a variety of singleimage depth cues, for example, by recognizing objects and surfaces, and reasoning about how these surfaces are connected to each other. In this paper, we focus on the problem of automatic 3d reconstruction of indoor scenes, specifically ones (sometimes called “Manhattan worlds”) that consist mainly of orthogonal planes. We use a Markov random field (MRF) model to identify the different planes and edges in the scene, as well as their orientations. Then, an iterative optimization algorithm is applied to infer the most probable position of all the planes, and thereby obtain a 3d reconstruction. Our approach is fully automatic—given an input image, no human intervention is necessary to obtain an approximate 3d reconstruction.",
"title": ""
},
{
"docid": "32764726652b5f95aa2d208f80e967c0",
"text": "Simulation is a technique-not a technology-to replace or amplify real experiences with guided experiences that evoke or replicate substantial aspects of the real world in a fully interactive manner. The diverse applications of simulation in healthcare can be categorized by 11 dimensions: aims and purposes of the simulation activity; unit of participation; experience level of participants; healthcare domain; professional discipline of participants; type of knowledge, skill, attitudes, or behaviors addressed; the simulated patient's age; technology applicable or required; site of simulation; extent of direct participation; and method of feedback used. Using simulation to improve safety will require full integration of its applications into the routine structures and practices of healthcare. The costs and benefits of simulation are difficult to determine, especially for the most challenging applications, where long-term use may be required. Various driving forces and implementation mechanisms can be expected to propel simulation forward, including professional societies, liability insurers, healthcare payers, and ultimately the public. The future of simulation in healthcare depends on the commitment and ingenuity of the healthcare simulation community to see that improved patient safety using this tool becomes a reality.",
"title": ""
},
{
"docid": "84307c2dd94ebe89c46a535b31b4b51b",
"text": "Building systems that autonomously create temporal abstractions from data is a key challenge in scaling learning and planning in reinforcement learning. One popular approach for addressing this challenge is the options framework [41]. However, only recently in [1] was a policy gradient theorem derived for online learning of general purpose options in an end to end fashion. In this work, we extend previous work on this topic that only focuses on learning a two-level hierarchy including options and primitive actions to enable learning simultaneously at multiple resolutions in time. We achieve this by considering an arbitrarily deep hierarchy of options where high level temporally extended options are composed of lower level options with finer resolutions in time. We extend results from [1] and derive policy gradient theorems for a deep hierarchy of options. Our proposed hierarchical option-critic architecture is capable of learning internal policies, termination conditions, and hierarchical compositions over options without the need for any intrinsic rewards or subgoals. Our empirical results in both discrete and continuous environments demonstrate the efficiency of our framework.",
"title": ""
},
{
"docid": "fcf0ac3b52a1db116463e7376dae4950",
"text": "Although the ability to perform complex cognitive operations is assumed to be impaired following acute marijuana smoking, complex cognitive performance after acute marijuana use has not been adequately assessed under experimental conditions. In the present study, we used a within-participant double-blind design to evaluate the effects acute marijuana smoking on complex cognitive performance in experienced marijuana smokers. Eighteen healthy research volunteers (8 females, 10 males), averaging 24 marijuana cigarettes per week, completed this three-session outpatient study; sessions were separated by at least 72-hrs. During sessions, participants completed baseline computerized cognitive tasks, smoked a single marijuana cigarette (0%, 1.8%, or 3.9% Δ9-THC w/w), and completed additional cognitive tasks. Blood pressure, heart rate, and subjective effects were also assessed throughout sessions. Marijuana cigarettes were administered in a double-blind fashion and the sequence of Δ9-THC concentration order was balanced across participants. Although marijuana significantly increased the number of premature responses and the time participants required to complete several tasks, it had no effect on accuracy on measures of cognitive flexibility, mental calculation, and reasoning. Additionally, heart rate and several subjective-effect ratings (e.g., “Good Drug Effect,” “High,” “Mellow”) were significantly increased in a Δ9-THC concentration-dependent manner. These data demonstrate that acute marijuana smoking produced minimal effects on complex cognitive task performance in experienced marijuana users.",
"title": ""
},
{
"docid": "32e430c84b64d123763ed2e034696e20",
"text": "The Internet of Things (IoT) is becoming a key infrastructure for the development of smart ecosystems. However, the increased deployment of IoT devices with poor security has already rendered them increasingly vulnerable to cyber attacks. In some cases, they can be used as a tool for committing serious crimes. Although some researchers have already explored such issues in the IoT domain and provided solutions for them, there remains the need for a thorough analysis of the challenges, solutions, and open problems in this domain. In this paper, we consider this research gap and provide a systematic analysis of security issues of IoT-based systems. Then, we discuss certain existing research projects to resolve the security issues. Finally, we highlight a set of open problems and provide a detailed description for each. We posit that our systematic approach for understanding the nature and challenges in IoT security will motivate researchers to addressing and solving these problems.",
"title": ""
},
{
"docid": "48d2f38037b0cab83ca4d57bf19ba903",
"text": "The term sentiment analysis can be used to refer to many different, but related, problems. Most commonly, it is used to refer to the task of automatically determining the valence or polarity of a piece of text, whether it is positive, negative, or neutral. However, more generally, it refers to determining one’s attitude towards a particular target or topic. Here, attitude can mean an evaluative judgment, such as positive or negative, or an emotional or affectual attitude such as frustration, joy, anger, sadness, excitement, and so on. Note that some authors consider feelings to be the general category that includes attitude, emotions, moods, and other affectual states. In this chapter, we use ‘sentiment analysis’ to refer to the task of automatically determining feelings from text, in other words, automatically determining valence, emotions, and other affectual states from text. Osgood, Suci, and Tannenbaum (1957) showed that the three most prominent dimensions of meaning are evaluation (good–bad), potency (strong–weak), and activity (active– passive). Evaluativeness is roughly the same dimension as valence (positive–negative). Russell (1980) developed a circumplex model of affect characterized by two primary dimensions: valence and arousal (degree of reactivity to stimulus). Thus, it is not surprising that large amounts of work in sentiment analysis are focused on determining valence. (See survey articles by Pang and Lee (2008), Liu and Zhang (2012), and Liu (2015).) However, there is some work on automatically detecting arousal (Thelwall, Buckley, Paltoglou, Cai, & Kappas, 2010; Kiritchenko, Zhu, & Mohammad, 2014b; Mohammad, Kiritchenko, & Zhu, 2013a) and growing interest in detecting emotions such as anger, frustration, sadness, and optimism in text (Mohammad, 2012; Bellegarda, 2010; Tokuhisa, Inui, & Matsumoto, 2008; Strapparava & Mihalcea, 2007; John, Boucouvalas, & Xu, 2006; Mihalcea & Liu, 2006; Genereux & Evans, 2006; Ma, Prendinger, & Ishizuka, 2005; Holzman & Pottenger, 2003; Boucouvalas, 2002; Zhe & Boucouvalas, 2002). Further, massive amounts of data emanating from social media have led to significant interest in analyzing blog posts, tweets, instant messages, customer reviews, and Facebook posts for both valence (Kiritchenko et al., 2014b; Kiritchenko, Zhu, Cherry, & Mohammad, 2014a; Mohammad et al., 2013a; Aisopos, Papadakis, Tserpes, & Varvarigou, 2012; Bakliwal, Arora, Madhappan, Kapre, Singh, & Varma, 2012; Agarwal, Xie, Vovsha, Rambow, & Passonneau, 2011; Thelwall, Buckley, & Paltoglou, 2011; Brody & Diakopoulos, 2011; Pak & Paroubek, 2010) and emotions (Hasan, Rundensteiner, & Agu, 2014; Mohammad & Kiritchenko, 2014; Mohammad, Zhu, Kiritchenko, & Martin, 2014; Choudhury, Counts, & Gamon, 2012; Mohammad, 2012a; Wang, Chen, Thirunarayan, & Sheth, 2012; Tumasjan, Sprenger, Sandner, & Welpe, 2010b; Kim, Gilbert, Edwards, &",
"title": ""
},
{
"docid": "b7600e8798f867fb267cfdd9129948c7",
"text": "In this paper, we consider an interesting vision problem—salient instance segmentation. Other than producing approximate bounding boxes, our network also outputs high-quality instance-level segments. Taking into account the category-independent property of each target, we design a single stage salient instance segmentation framework, with a novel segmentation branch. Our new branch regards not only local context inside each detection window but also its surrounding context, enabling us to distinguish the instances in the same scope even with obstruction. Our network is end-to-end trainable and runs at a fast speed (40 fps when processing an image with resolution 320 × 320). We evaluate our approach on a public available benchmark and show that it outperforms other alternative solutions. In addition, we also provide a thorough analysis of the design choices to help readers better understand the functions of each part in our network. To facilitate the development of this area, our code will be available at https://github.com/RuochenFan/S4Net.",
"title": ""
},
{
"docid": "734638df47b05b425b0dcaaab11d886e",
"text": "Satisfying the needs of users of online video streaming services requires not only to manage the network Quality of Service (QoS), but also to address the user's Quality of Experience (QoE) expectations. While QoS factors reflect the status of individual networks, they do not comprehensively capture the end-to-end features affecting the quality delivered to the user. In this situation, QoE management is the better option. However, traditionally used QoE management models require human interaction and have stringent requirements in terms of time and complexity. Thus, they fail to achieve successful performance in terms of real-timeliness, accuracy, scalability and adaptability. This dissertation work investigates new methods to bring QoE management to the level required by the real-time management of video services. In this paper, we highlight our main contributions. First, with the aim to perform a combined network-service assessment, we designed an experimental methodology able to map network QoS onto service QoE. Our methodology is meant to provide service and network providers with the means to pinpoint the working boundaries of their video-sets and to predict the effect of network policies on perception. Second, we developed a generic machine learning framework that allows deriving accurate predictive No Reference (NR) assessment metrics, based on simplistic NR QoE methods, that are functionally and computationally viable for real-time QoE evaluation. The tools, methods and conclusions derived from this dissertation conform a solid contribution to QoE management of video streaming services, opening new venues for further research.",
"title": ""
},
{
"docid": "05a788c8387e58e59e8345f343b4412a",
"text": "We deal with the problem of recognizing social roles played by people in an event. Social roles are governed by human interactions, and form a fundamental component of human event description. We focus on a weakly supervised setting, where we are provided different videos belonging to an event class, without training role labels. Since social roles are described by the interaction between people in an event, we propose a Conditional Random Field to model the inter-role interactions, along with person specific social descriptors. We develop tractable variational inference to simultaneously infer model weights, as well as role assignment to all people in the videos. We also present a novel YouTube social roles dataset with ground truth role annotations, and introduce annotations on a subset of videos from the TRECVID-MED11 [1] event kits for evaluation purposes. The performance of the model is compared against different baseline methods on these datasets.",
"title": ""
},
{
"docid": "ae454338771f068e2b8a1f475855de11",
"text": "For powder-bed electron beam additive manufacturing (EBAM), support structures are required when fabricating an overhang to prevent defects such as curling, which is due to the complex thermomechanical process in EBAM. In this study, finite element modeling is developed to simulate the thermomechanical process in EBAM in building overhang part. Thermomechanical characteristics such as thermal gradients and thermal stresses around the overhang build are evaluated and analyzed. The model is applied to evaluate process parameter effects on the severity of thermal stresses. The major results are summarized as follows. For a uniform set of process parameters, the overhang areas have a higher maximum temperature, a higher tensile stress, and a larger distortion than the areas above a solid substrate. A higher energy density input, e.g., a lower beam speed or a higher beam current may cause more severe curling at the overhang area.",
"title": ""
},
{
"docid": "6dbfefb384a3dbd28beee2d0daebae52",
"text": "Many NLP applications require disambiguating polysemous words. Existing methods that learn polysemous word vector representations involve first detecting various senses and optimizing the sensespecific embeddings separately, which are invariably more involved than single sense learning methods such as word2vec. Evaluating these methods is also problematic, as rigorous quantitative evaluations in this space is limited, especially when compared with single-sense embeddings. In this paper, we propose a simple method to learn a word representation, given any context. Our method only requires learning the usual single sense representation, and coefficients that can be learnt via a single pass over the data. We propose several new test sets for evaluating word sense induction, relevance detection, and contextual word similarity, significantly supplementing the currently available tests. Results on these and other tests show that while our method is embarrassingly simple, it achieves excellent results when compared to the state of the art models for unsupervised polysemous word representation learning. Our code and data are at https://github.com/dingwc/",
"title": ""
},
{
"docid": "9d7a67f2cd12a6fd033ad102fb9c526e",
"text": "We begin by pretraining the source task model, fS , using the task loss on the labeled source data. Next, we perform pixel-level adaptation using our image space GAN losses together with semantic consistency and cycle consistency losses. This yeilds learned parameters for the image transformations, GS!T and GT!S , image discriminators, DS and DT , as well as an initial setting of the task model, fT , which is trained using pixel transformed source images and the corresponding source pixel labels. Finally, we perform feature space adpatation in order to update the target semantic model, fT , to have features which are aligned between the source images mapped into target style and the real target images. During this phase, we learn the feature discriminator, Dfeat and use this to guide the representation update to fT . In general, our method could also perform phases 2 and 3 simultaneously, but this would require more GPU memory then available at the time of these experiments.",
"title": ""
},
{
"docid": "bf7e67dededd5f4585aaefecc60e7c1a",
"text": "Multidimensional long short-term memory recurrent neural networks achieve impressive results for handwriting recognition. However, with current CPU-based implementations, their training is very expensive and thus their capacity has so far been limited. We release an efficient GPU-based implementation which greatly reduces training times by processing the input in a diagonal-wise fashion. We use this implementation to explore deeper and wider architectures than previously used for handwriting recognition and show that especially the depth plays an important role. We outperform state of the art results on two databases with a deep multidimensional network.",
"title": ""
}
] |
scidocsrr
|
1483bb0c391bd654416b1079bb86a79b
|
Smoke detection using spatial and temporal analyses
|
[
{
"docid": "70e88fe5fc43e0815a1efa05e17f7277",
"text": "Smoke detection is a crucial task in many video surveillance applications and could have a great impact to raise the level of safety of urban areas. Many commercial smoke detection sensors exist but most of them cannot be applied in open space or outdoor scenarios. With this aim, the paper presents a smoke detection system that uses a common CCD camera sensor to detect smoke in images and trigger alarms. First, a proper background model is proposed to reliably extract smoke regions and avoid over-segmentation and false positives in outdoor scenarios where many distractors are present, such as moving trees or light reflexes. A novel Bayesian approach is adopted to detect smoke regions in the scene analyzing image energy by means of the Wavelet Transform coefficients and Color Information. A statistical model of image energy is built, using a temporal Gaussian Mixture, to analyze the energy decay that typically occurs when smoke covers the scene then the detection is strengthen evaluating the color blending between a reference smoke color and the input frame. The proposed system is capable of detecting rapidly smoke events both in night and in day conditions with a reduced number of false alarms hence is particularly suitable for monitoring large outdoor scenarios where common sensors would fail. An extensive experimental campaign both on recorded videos and live cameras evaluates the efficacy and efficiency of the system in many real world scenarios, such as outdoor storages and forests.",
"title": ""
}
] |
[
{
"docid": "e597f9fbd0d42066b991c6e917a1e767",
"text": "While Open Data initiatives are diverse, they aim to create and contribute to public value. Yet several potential contradictions exist between public values, such as trust, transparency, privacy, and security, and Open Data policies. To bridge these contradictions, we present the notion of precommitment as a restriction of one’s choices. Conceptualized as a policy instrument, precommitment can be applied by an organization to restrict the extent to which an Open Data policy might conflict with public values. To illustrate the use of precommitment, we present two case studies at two public sector organizations, where precommitment is applied during a data request procedure to reconcile conflicting values. In this procedure, precommitment is operationalized in three phases. In the first phase, restrictions are defined on the type and the content of the data that might be requested. The second phase involves the preparation of the data to be delivered according to legal requirements and the decisions taken in phase 1. Data preparation includes amongst others the deletion of privacy sensitive or other problematic attributes. Finally, phase 3 pertains to the establishment of the conditions of reuse of the data, limiting the use to restricted user groups or opening the data for everyone.",
"title": ""
},
{
"docid": "e0fae6d662cdeb4815ed29a828747491",
"text": "In this paper, a novel framework is developed to achieve effective summarization of large-scale image collection by treating the problem of automatic image summarization as the problem of dictionary learning for sparse representation, e.g., the summarization task can be treated as a dictionary learning task (i.e., the given image set can be reconstructed sparsely with this dictionary). For image set of a specific category or a mixture of multiple categories, we have built a sparsity model to reconstruct all its images by using a subset of most representative images (i.e., image summary); and we adopted the simulated annealing algorithm to learn such sparse dictionary by minimizing an explicit optimization function. By investigating their reconstruction ability under sparsity constrain and diversity constrain, we have quantitatively measure the performance of various summarization algorithms. Our experimental results have shown that our dictionary learning for sparse representation algorithm can obtain more accurate summary as compared with other baseline algorithms.",
"title": ""
},
{
"docid": "87e56672751a8eb4d5a08f0459e525ca",
"text": "— The Internet of Things (IoT) has transformed many aspects of modern manufacturing, from design to production to quality control. In particular, IoT and digital manufacturing technologies have substantially accelerated product development cycles and manufacturers can now create products of a complexity and precision not heretofore possible. New threats to supply chain security have arisen from connecting machines to the Internet and introducing complex IoT-based systems controlling manufacturing processes. By attacking these IoT-based manufacturing systems and tampering with digital files, attackers can manipulate physical characteristics of parts and change the dimensions, shapes, or mechanical properties of the parts, which can result in parts that fail in the field. These defects increase manufacturing costs and allow silent problems to occur only under certain loads that can threaten safety and/or lives. To understand potential dangers and protect manufacturing system safety, this paper presents two taxonomies: one for classifying cyber-physical attacks against manufacturing processes and another for quality control measures for counteracting these attacks. We systematically identify and classify possible cyber-physical attacks and connect the attacks with variations in manufacturing processes and quality control measures. Our tax-onomies also provide a scheme for linking emerging IoT-based manufacturing system vulnerabilities to possible attacks and quality control measures.",
"title": ""
},
{
"docid": "d1444f26cee6036f1c2df67a23d753be",
"text": "Text mining has becoming an emerging research area in now-a-days that helps to extract useful information from large amount of natural language text documents. The need of grouping similar documents together for different applications has gaining the attention of researchers in this area. Document clustering organizes the documents into different groups called as clusters. The documents in one cluster have higher degree of similarity than the documents in other cluster. The paper provides an overview of the document clustering reviewed from different papers and the challenges in document clustering. KeywordsText Mining, Document Clustering, Similarity Measures, Challenges in Document Clustering",
"title": ""
},
{
"docid": "027fca90352f826948d2d42bbeb6c863",
"text": "Inspired by the theory of Leitner’s learning box from the field of psychology, we propose DropSample, a new method for training deep convolutional neural networks (DCNNs), and apply it to large-scale online handwritten Chinese character recognition (HCCR). According to the principle of DropSample, each training sample is associated with a quota function that is dynamically adjusted on the basis of the classification confidence given by the DCNN softmax output. After a learning iteration, samples with low confidence will have a higher probability of being selected as training data in the next iteration; in contrast, well-trained and well-recognized samples with very high confidence will have a lower probability of being involved in the next training iteration and can be gradually eliminated. As a result, the learning process becomes more efficient as it progresses. Furthermore, we investigate the use of domain-specific knowledge to enhance the performance of DCNN by adding a domain knowledge layer before the traditional CNN. By adopting DropSample together with different types of domain-specific knowledge, the accuracy of HCCR can be improved efficiently. Experiments on the CASIA-OLHDWB 1.0, CASIA-OLHWDB 1.1, and ICDAR 2013 online HCCR competition datasets yield outstanding recognition rates of 97.33%, 97.06%, and 97.51% respectively, all of which are significantly better than the previous best results reported in the literature.",
"title": ""
},
{
"docid": "eed788297c1b49895f8f19012b6231f2",
"text": "Can the choice of words and tone used by the authors of financial news articles correlate to measurable stock price movements? If so, can the magnitude of price movement be predicted using these same variables? We investigate these questions using the Arizona Financial Text (AZFinText) system, a financial news article prediction system, and pair it with a sentiment analysis tool. Through our analysis, we found that subjective news articles were easier to predict in price direction (59.0% versus 50.0% of chance alone) and using a simple trading engine, subjective articles garnered a 3.30% return. Looking further into the role of author tone in financial news articles, we found that articles with a negative sentiment were easiest to predict in price direction (50.9% versus 50.0% of chance alone) and a 3.04% trading return. Investigating negative sentiment further, we found that our system was able to predict price decreases in articles of a positive sentiment 53.5% of the time, and price increases in articles of a negative",
"title": ""
},
{
"docid": "0dd334ac819bfb77094e06dc0c00efee",
"text": "How to propagate label information from labeled examples to unlabeled examples over a graph has been intensively studied for a long time. Existing graph-based propagation algorithms usually treat unlabeled examples equally, and transmit seed labels to the unlabeled examples that are connected to the labeled examples in a neighborhood graph. However, such a popular propagation scheme is very likely to yield inaccurate propagation, because it falls short of tackling ambiguous but critical data points (e.g., outliers). To this end, this paper treats the unlabeled examples in different levels of difficulties by assessing their reliability and discriminability, and explicitly optimizes the propagation quality by manipulating the propagation sequence to move from simple to difficult examples. In particular, we propose a novel iterative label propagation algorithm in which each propagation alternates between two paradigms, teaching-to-learn and learning-to-teach (TLLT). In the teaching-to-learn step, the learner conducts the propagation on the simplest unlabeled examples designated by the teacher. In the learning-to-teach step, the teacher incorporates the learner’s feedback to adjust the choice of the subsequent simplest examples. The proposed TLLT strategy critically improves the accuracy of label propagation, making our algorithm substantially robust to the values of tuning parameters, such as the Gaussian kernel width used in graph construction. The merits of our algorithm are theoretically justified and empirically demonstrated through experiments performed on both synthetic and real-world data sets.",
"title": ""
},
{
"docid": "27c6fa2e390fe1cbe1a47b9ef6667d35",
"text": "In this paper, we present a comprehensive study on supervised domain adaptation of PLDA based i-vector speaker recognition systems. After describing the system parameters subject to adaptation, we study the impact of their adaptation on recognition performance. Using the recently designed domain adaptation challenge, we observe that the adaptation of the PLDA parameters (i.e. across-class and within-class co variances) produces the largest gains. Nonetheless, length-normalization is also important; whereas using an indomani UBM and T matrix is not crucial. For the PLDA adaptation, we compare four approaches. Three of them are proposed in this work, and a fourth one was previously published. Overall, the four techniques are successful at leveraging varying amounts of labeled in-domain data and their performance is quite similar. However, our approaches are less involved, and two of them are applicable to a larger class of models (low-rank across-class).",
"title": ""
},
{
"docid": "67070d149bcee51cc93a81f21f15ad71",
"text": "As an important and fundamental tool for analyzing the schedulability of a real-time task set on the multiprocessor platform, response time analysis (RTA) has been researched for several years on both Global Fixed Priority (G-FP) and Global Earliest Deadline First (G-EDF) scheduling. This paper proposes a new analysis that improves over current state-of-the-art RTA methods for both G-FP and G-EDF scheduling, by reducing their pessimism. The key observation is that when estimating the carry-in workload, all the existing RTA techniques depend on the worst case scenario in which the carry-in job should execute as late as possible and just finishes execution before its worst case response time (WCRT). But the carry-in workload calculated under this assumption may be over-estimated, and thus the accuracy of the response time analysis may be impacted. To address this problem, we first propose a new method to estimate the carry-in workload more precisely. The proposed method does not depend on any specific scheduling algorithm and can be used for both G-FP and G-EDF scheduling. We then propose a general RTA algorithm that can improve most existing RTA tests by incorporating our carry-in estimation method. To further improve the execution efficiency, we also introduce an optimization technique for our RTA tests. Experiments with randomly generated task sets are conducted and the results show that, compared with the state-of-the-art technologies, the proposed tests exhibit considerable performance improvements, up to 9 and 7.8 percent under G-FP and G-EDF scheduling respectively, in terms of schedulability test precision.",
"title": ""
},
{
"docid": "2f9de2e94c6af95e9c2e9eb294a7696c",
"text": "The rapid growth of Electronic Health Records (EHRs), as well as the accompanied opportunities in Data-Driven Healthcare (DDH), has been attracting widespread interests and attentions. Recent progress in the design and applications of deep learning methods has shown promising results and is forcing massive changes in healthcare academia and industry, but most of these methods rely on massive labeled data. In this work, we propose a general deep learning framework which is able to boost risk prediction performance with limited EHR data. Our model takes a modified generative adversarial network namely ehrGAN, which can provide plausible labeled EHR data by mimicking real patient records, to augment the training dataset in a semi-supervised learning manner. We use this generative model together with a convolutional neural network (CNN) based prediction model to improve the onset prediction performance. Experiments on two real healthcare datasets demonstrate that our proposed framework produces realistic data samples and achieves significant improvements on classification tasks with the generated data over several stat-of-the-art baselines.",
"title": ""
},
{
"docid": "5398b76e55bce3c8e2c1cd89403b8bad",
"text": "To cite: He A, Kwatra SG, Kazi N, et al. BMJ Case Rep Published online: [please include Day Month Year] doi:10.1136/bcr-2016215335 DESCRIPTION A woman aged 45 years presented for evaluation of skin lesions. She reported an 8–9-year history of occasionally tender, waxing-and-waning skin nodules refractory to dapsone, prednisone and methotrexate. Examination revealed multiple indurated subcutaneous nodules distributed on the upper extremities, with scattered patches of lipoatrophy in areas of nodule regression (figure 1). Her medical history was unremarkable; CBC and CMP were within normal limits, with no history of radiotherapy or evidence of internal organ involvement. She had a positive ANA titre (1:160, speckled), but negative anti-dsDNA, anti-Smith, anti-Ro and anti-La antibodies. Differential diagnosis included erythema nodosum (EN), erythema induratum of Bazin (EIB), lupus profundus (LP) and cutaneous lymphoma. Initial wedge biopsy in 2008 disclosed a predominantly lobular panniculitic process with some septal involvement (figure 2A). Broad zones of necrosis were present (figure 2B). The infiltrate consisted of a pleomorphic population of lymphocytes with occasional larger atypical lymphocytes (figure 2C). There were foci of adipocyte rimming by the atypical lymphocytes (figure 2C). Immunophenotyping revealed predominance of CD3+ T cells with some CD20+ B-cell aggregates. The atypical cells stained CD4 and CD8 in approximately equal ratios. TIA-1 was positive in many of the atypical cells but not prominently enough to render a diagnosis of cytotoxic T-cell lymphoma. T-cell receptor PCR studies showed polyclonality. Subsequent biopsies performed annually after treatment with prednisone in 2008 and 2010, dapsone in 2009 and methotrexate in 2012 showed very similar pathological and molecular features. Adipocyte rimming and TCR polyclonality persisted. EN is characterised by subcutaneous nodules on the lower extremities in association with elevated erythrocyte sedimentation rate (ESR) and C reactive protein (CRP), influenza-like prodrome preceding nodule formation and self-limiting course. Histologically, EN shows a mostly septal panniculitis with radial granulomas. EN was ruled out on the basis of normal ESR (6) and CRP (<0.1), chronic relapsing course and predominantly lobular panniculitis process histologically. EIB typically presents with violaceous nodules located on the posterior lower extremities, with arms rarely affected, of patients with a history of tuberculosis (TB). Histologically, EIB shows granulomatous inflammation with focal necrosis, vasculitis and septal fibrosis. Our patient had no evidence or history of TB infection and presented with nodules of a different clinical morphology. Ultimately, this constellation of histological and immunophenotypic findings showed an atypical panniculitic T-lymphocytic infiltrate. Although the lesion showed a lobular panniculitis with features that could be seen in subcutaneous panniculitis-like T-cell lymphoma (SPTCL), the presence of plasma cells, absence of CD8 and TIA restriction and T-cell polyclonality did not definitively support that",
"title": ""
},
{
"docid": "9e9be149fc44552b6ac9eb2d90d4a4ba",
"text": "In this work, a level set energy for segmenting the lungs from digital Posterior-Anterior (PA) chest x-ray images is presented. The primary challenge in using active contours for lung segmentation is local minima due to shading effects and presence of strong edges due to the rib cage and clavicle. We have used the availability of good contrast at the lung boundaries to extract a multi-scale set of edge/corner feature points and drive our active contour model using these features. We found these features when supplemented with a simple region based data term and a shape term based on the average lung shape, able to handle the above local minima issues. The algorithm was tested on 1130 clinical images, giving promising results.",
"title": ""
},
{
"docid": "a00cc13a716439c75a5b785407b02812",
"text": "A novel current feedback programming principle and circuit architecture are presented, compatible with LED displays utilizing the 2T1C pixel structure. The new pixel programming approach is compatible with all TFT backplane technologies and can compensate for non-uniformities in both threshold voltage and carrier mobility of the OLED pixel drive TFT, due to a feedback loop that modulates the gate of the driving transistor according to the OLED current. The circuit can be internal or external to the integrated display data driver. Based on simulations and data gathered through a fabricated prototype driver, a pixel drive current of 20 nA can be programmed within an addressing time ranging from 10 μs to 50 μs.",
"title": ""
},
{
"docid": "f584b2d89bacacf31158496460d6f546",
"text": "Significant advances in clinical practice as well as basic and translational science were presented at the American Transplant Congress this year. Topics included innovative clinical trials to recent advances in our basic understanding of the scientific underpinnings of transplant immunology. Key areas of interest included the following: clinical trials utilizing hepatitis C virus-positive (HCV+ ) donors for HCV- recipients, the impact of the new allocation policies, normothermic perfusion, novel treatments for desensitization, attempts at precision medicine, advances in xenotransplantation, the role of mitochondria and exosomes in rejection, nanomedicine, and the impact of the microbiota on transplant outcomes. This review highlights some of the most interesting and noteworthy presentations from the meeting.",
"title": ""
},
{
"docid": "8ea6c2f9ee972ef321e12b26dd1f9022",
"text": "This paper describes a simultaneous localization and mapping (SLAM) algorithm for use in unstructured environments that is effective regardless of the geometric complexity of the environment. Features are described using B-splines as modeling tool, and the set of control points defining their shape is used to form a complete and compact description of the environment, thus making it feasible to use an extended Kalman-filter (EKF) based SLAM algorithm. This method is the first known EKF-SLAM implementation capable of describing general free-form features in a parametric manner. Efficient strategies for computing the relevant Jacobians, perform data association, initialization, and map enlargement are presented. The algorithms are evaluated for accuracy and consistency using computer simulations, and for effectiveness using experimental data gathered from different real environments.",
"title": ""
},
{
"docid": "1026bd2ccbea3a7cbb0f337de6ce2981",
"text": "Helicobacter pylori (H. pylori) is an extremely common, yet underappreciated, pathogen that is able to alter host physiology and subvert the host immune response, allowing it to persist for the life of the host. H. pylori is the primary cause of peptic ulcers and gastric cancer. In the United States, the annual cost associated with peptic ulcer disease is estimated to be $6 billion and gastric cancer kills over 700000 people per year globally. The prevalence of H. pylori infection remains high (> 50%) in much of the world, although the infection rates are dropping in some developed nations. The drop in H. pylori prevalence could be a double-edged sword, reducing the incidence of gastric diseases while increasing the risk of allergies and esophageal diseases. The list of diseases potentially caused by H. pylori continues to grow; however, mechanistic explanations of how H. pylori could contribute to extragastric diseases lag far behind clinical studies. A number of host factors and H. pylori virulence factors act in concert to determine which individuals are at the highest risk of disease. These include bacterial cytotoxins and polymorphisms in host genes responsible for directing the immune response. This review discusses the latest advances in H. pylori pathogenesis, diagnosis, and treatment. Up-to-date information on correlations between H. pylori and extragastric diseases is also provided.",
"title": ""
},
{
"docid": "e5bbf88eedf547551d28a731bd4ebed7",
"text": "We conduct an empirical study to test the ability of convolutional neural networks (CNNs) to reduce the effects of nuisance transformations of the input data, such as location, scale and aspect ratio. We isolate factors by adopting a common convolutional architecture either deployed globally on the image to compute class posterior distributions, or restricted locally to compute class conditional distributions given location, scale and aspect ratios of bounding boxes determined by proposal heuristics. In theory, averaging the latter should yield inferior performance compared to proper marginalization. Yet empirical evidence suggests the converse, leading us to conclude that - at the current level of complexity of convolutional architectures and scale of the data sets used to train them - CNNs are not very effective at marginalizing nuisance variability. We also quantify the effects of context on the overall classification task and its impact on the performance of CNNs, and propose improved sampling techniques for heuristic proposal schemes that improve end-to-end performance to state-of-the-art levels. We test our hypothesis on a classification task using the ImageNet Challenge benchmark and on a wide-baseline matching task using the Oxford and Fischer's datasets.",
"title": ""
},
{
"docid": "e7fb4643c062e092a52ac84928ab46e9",
"text": "Object detection and tracking are main tasks in video surveillance systems. Extracting the background is an intensive task with high computational cost. This work proposes a hardware computing engine to perform background subtraction on low-cost field programmable gate arrays (FPGAs), focused on resource-limited environments. Our approach is based on the codebook algorithm and offers very low accuracy degradation. We have analyzed resource consumption and performance trade-offs in Spartan-3 FPGAs by Xilinx. In addition, an accuracy evaluation with standard benchmark sequences has been performed, obtaining better results than previous hardware approaches. The implementation is able to segment objects in sequences with resolution $$768\\times 576$$ at 50 fps using a robust and accurate approach, and an estimated power consumption of 5.13 W.",
"title": ""
},
{
"docid": "473968c14db4b189af126936fd5486ca",
"text": "Disclaimer/Complaints regulations If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please let the Library know, stating your reasons. In case of a legitimate complaint, the Library will make the material inaccessible and/or remove it from the website. Please Ask the Library: http://uba.uva.nl/en/contact, or a letter to: Library of the University of Amsterdam, Secretariat, Singel 425, 1012 WP Amsterdam, The Netherlands. You will be contacted as soon as possible.",
"title": ""
},
{
"docid": "69bf97d8a40757a19ca9431c6bad0f07",
"text": "To detect scene text in the video is valuable to many content-based video applications. In this paper, we present a novel scene text detection and tracking method for videos, which effectively exploits the cues of the background regions of the text. Specifically, we first extract text candidates and potential background regions of text from the video frame. Then, we exploit the spatial, shape and motional correlations between the text and its background region with a bipartite graph model and the random walk algorithm to refine the text candidates for improved accuracy. We also present an effective tracking framework for text in the video, making use of the temporal correlation of text cues across successive frames, which contributes to enhancing both the precision and the recall of the final text detection result. Experiments on public scene text video datasets demonstrate the state-of-the-art performance of the proposed method.",
"title": ""
}
] |
scidocsrr
|
7f452369d45c64cece868ccc009e04e6
|
Real-Time Temporal Action Localization in Untrimmed Videos by Sub-Action Discovery
|
[
{
"docid": "ee9c0e79b29fbe647e3e0ccb168532b5",
"text": "We propose an effective approach for spatio-temporal action localization in realistic videos. The approach first detects proposals at the frame-level and scores them with a combination of static and motion CNN features. It then tracks high-scoring proposals throughout the video using a tracking-by-detection approach. Our tracker relies simultaneously on instance-level and class-level detectors. The tracks are scored using a spatio-temporal motion histogram, a descriptor at the track level, in combination with the CNN features. Finally, we perform temporal localization of the action using a sliding-window approach at the track level. We present experimental results for spatio-temporal localization on the UCF-Sports, J-HMDB and UCF-101 action localization datasets, where our approach outperforms the state of the art with a margin of 15%, 7% and 12% respectively in mAP.",
"title": ""
},
{
"docid": "4829d8c0dd21f84c3afbe6e1249d6248",
"text": "We present an action recognition and detection system from temporally untrimmed videos by combining motion and appearance features. Motion and appearance are two kinds of complementary cues for human action understanding from video. For motion features, we adopt the Fisher vector representation with improved dense trajectories due to its rich descriptive capacity. For appearance feature, we choose the deep convolutional neural network activations due to its recent success in image based tasks. With this fused feature of iDT and CNN, we train a SVM classifier for each action class in the one-vs-all scheme. We report both the recognition and detection results of our system on Thumos 14 Challenge. From the results, we see that our method rank 4 in the action recognition task and 2 in the action detection task.",
"title": ""
}
] |
[
{
"docid": "4315cbfa13e9a32288c1857f231c6410",
"text": "The likelihood of soft errors increase with system complexity, reduction in operational voltages, exponential growth in transistors per chip, increases in clock frequencies and device shrinking. As the memory bit-cell area is condensed, single event upset that would have formerly despoiled only a single bit-cell are now proficient of upsetting multiple contiguous memory bit-cells per particle strike. While these error types are beyond the error handling capabilities of the frequently used error correction codes (ECCs) for single bit, the overhead associated with moving to more sophisticated codes for multi-bit errors is considered to be too costly. To address this issue, this paper presents a new approach to detect and correct multi-bit soft error by using Horizontal-Vertical-Double-Bit-Diagonal (HVDD) parity bits with a comparatively low overhead.",
"title": ""
},
{
"docid": "d8cd05b5a187e8bc3eacd8777fb36218",
"text": "In this article we review bony changes resulting from alterations in intracranial pressure (ICP) and the implications for ophthalmologists and the patients for whom we care. Before addressing ophthalmic implications, we will begin with a brief overview of bone remodeling. Bony changes seen with chronic intracranial hypotension and hypertension will be discussed. The primary objective of this review was to bring attention to bony changes seen with chronic intracranial hypotension. Intracranial hypotension skull remodeling can result in enophthalmos. In advanced disease enophthalmos develops to a degree that is truly disfiguring. The most common finding for which subjects are referred is ocular surface disease, related to loss of contact between the eyelids and the cornea. Other abnormalities seen include abnormal ocular motility and optic atrophy. Recognition of such changes is important to allow for diagnosis and treatment prior to advanced clinical deterioration. Routine radiographic assessment of bony changes may allow for the identification of patient with abnormal ICP prior to the development of clinically significant disease.",
"title": ""
},
{
"docid": "a24b4546eb2da7ce6ce70f45cd16e07d",
"text": "This paper examines the state of the art in mobile clinical and health-related apps. A 2012 estimate puts the number of health-related apps at no fewer than 40,000, as healthcare professionals and consumers continue to express concerns about the quality of many apps, calling for some form of app regulatory control or certification to be put in place. We describe the range of apps on offer as of 2013, and then present a brief survey of evaluation studies of medical and health-related apps that have been conducted to date, covering a range of clinical disciplines and topics. Our survey includes studies that highlighted risks, negative issues and worrying deficiencies in existing apps. We discuss the concept of 'apps as a medical device' and the relevant regulatory controls that apply in USA and Europe, offering examples of apps that have been formally approved using these mechanisms. We describe the online Health Apps Library run by the National Health Service in England and the calls for a vetted medical and health app store. We discuss the ingredients for successful apps beyond the rather narrow definition of 'apps as a medical device'. These ingredients cover app content quality, usability, the need to match apps to consumers' general and health literacy levels, device connectivity standards (for apps that connect to glucometers, blood pressure monitors, etc.), as well as app security and user privacy. 'Happtique Health App Certification Program' (HACP), a voluntary app certification scheme, successfully captures most of these desiderata, but is solely focused on apps targeting the US market. HACP, while very welcome, is in ways reminiscent of the early days of the Web, when many \"similar\" quality benchmarking tools and codes of conduct for information publishers were proposed to appraise and rate online medical and health information. It is probably impossible to rate and police every app on offer today, much like in those early days of the Web, when people quickly realised the same regarding informational Web pages. The best first line of defence was, is, and will always be to educate consumers regarding the potentially harmful content of (some) apps.",
"title": ""
},
{
"docid": "293e1834eef415f08e427a41e78d818f",
"text": "Autonomous robots are complex systems that require the interaction between numerous heterogeneous components (software and hardware). Because of the increase in complexity of robotic applications and the diverse range of hardware, robotic middleware is designed to manage the complexity and heterogeneity of the hardware and applications, promote the integration of new technologies, simplify software design, hide the complexity of low-level communication and the sensor heterogeneity of the sensors, improve software quality, reuse robotic software infrastructure across multiple research efforts, and to reduce production costs. This paper presents a literature survey and attribute-based bibliography of the current state of the art in robotic middleware design. The main aim of the survey is to assist robotic middleware researchers in evaluating the strengths and weaknesses of current approaches and their appropriateness for their applications. Furthermore, we provide a comprehensive set of appropriate bibliographic references that are classified based on middleware attributes.",
"title": ""
},
{
"docid": "84a2d26a0987a79baf597508543f39b6",
"text": "In order to recommend products to users we must ultimately predict how a user will respond to a new product. To do so we must uncover the implicit tastes of each user as well as the properties of each product. For example, in order to predict whether a user will enjoy Harry Potter, it helps to identify that the book is about wizards, as well as the user's level of interest in wizardry. User feedback is required to discover these latent product and user dimensions. Such feedback often comes in the form of a numeric rating accompanied by review text. However, traditional methods often discard review text, which makes user and product latent dimensions difficult to interpret, since they ignore the very text that justifies a user's rating. In this paper, we aim to combine latent rating dimensions (such as those of latent-factor recommender systems) with latent review topics (such as those learned by topic models like LDA). Our approach has several advantages. Firstly, we obtain highly interpretable textual labels for latent rating dimensions, which helps us to `justify' ratings with text. Secondly, our approach more accurately predicts product ratings by harnessing the information present in review text; this is especially true for new products and users, who may have too few ratings to model their latent factors, yet may still provide substantial information from the text of even a single review. Thirdly, our discovered topics can be used to facilitate other tasks such as automated genre discovery, and to identify useful and representative reviews.",
"title": ""
},
{
"docid": "3a920687e57591c1abfaf10b691132a7",
"text": "BP3TKI Palembang is the government agencies that coordinate, execute and selection of prospective migrants registration and placement. To simplify the existing procedures and improve decision-making is necessary to build a decision support system (DSS) to determine eligibility for employment abroad by applying Fuzzy Multiple Attribute Decision Making (FMADM), using the linear sequential systems development methods. The system is built using Microsoft Visual Basic. Net 2010 and SQL Server 2008 database. The design of the system using use case diagrams and class diagrams to identify the needs of users and systems as well as systems implementation guidelines. Decision Support System which is capable of ranking the dihasialkan to prospective migrants, making it easier for parties to take keputusna BP3TKI the workers who will be flown out of the country.",
"title": ""
},
{
"docid": "359d76f0b4f758c3a58e886e840c5361",
"text": "Cover crops are important components of sustainable agricultural systems. They increase surface residue and aid in the reduction of soil erosion. They improve the structure and water-holding capacity of the soil and thus increase the effectiveness of applied N fertilizer. Legume cover crops such as hairy vetch and crimson clover fix nitrogen and contribute to the nitrogen requirements of subsequent crops. Cover crops can also suppress weeds, provide suitable habitat for beneficial predator insects, and act as non-host crops for nematodes and other pests in crop rotations. This paper reviews the agronomic and economic literature on using cover crops in sustainable food production and reports on past and present research on cover crops and sustainable agriculture at the Beltsville Agricultural Research Center, Maryland. Previous studies suggested that the profitability of cover crops is primarily the result of enhanced crop yields rather than reduced input costs. The experiments at the Beltsville Agricultural Research Center on fresh-market tomato production showed that tomatoes grown with hairy vetch mulch were higher yielding and more profitable than those grown with black polyethylene and no mulch system. Previous studies of cover crops in grain production indicated that legume cover crops such as hairy vetch and crimson clover are more profitable than grass cover crops such as rye or wheat because of the ability of legumes to contribute N to the following crop. A com-",
"title": ""
},
{
"docid": "e0ff61d4b5361c3e2b39265310d02b85",
"text": "This paper presents an adaptive technique for obtaining centers of the hidden layer neurons of radial basis function neural network (RBFNN) for face recognition. The proposed technique uses firefly algorithm to obtain natural sub-clusters of training face images formed due to variations in pose, illumination, expression and occlusion, etc. Movement of fireflies in a hyper-dimensional input space is controlled by tuning the parameter gamma (γ) of firefly algorithm which plays an important role in maintaining the trade-off between effective search space exploration, firefly convergence, overall computational time and the recognition accuracy. The proposed technique is novel as it combines the advantages of evolutionary firefly algorithm and RBFNN in adaptive evolution of number and centers of hidden neurons. The strength of the proposed technique lies in its fast convergence, improved face recognition performance, reduced feature selection overhead and algorithm stability. The proposed technique is validated using benchmark face databases, namely ORL, Yale, AR and LFW. The average face recognition accuracies achieved using proposed algorithm for the above face databases outperform some of the existing techniques in face recognition.",
"title": ""
},
{
"docid": "4f0e454b8274636c56a1617668f08eed",
"text": "Mobile devices are an important part of our everyday lives, and the Android platform has become a market leader. In recent years a number of approaches for Android malware detection have been proposed, using permissions, source code analysis, or dynamic analysis. In this paper, we propose to use a probabilistic discriminative model based on regularized logistic regression for Android malware detection. Through extensive experimental evaluation, we demonstrate that it can generate probabilistic outputs with highly accurate classification results. In particular, we propose to use Android API calls as features extracted from decompiled source code, and analyze and explore issues in feature granularity, feature representation, feature selection, and regularization. We show that the probabilistic discriminative model also works well with permissions, and substantially outperforms the state-of-the-art methods for Android malware detection with application permissions. Furthermore, the discriminative learning model achieves the best detection results by combining both decompiled source code and application permissions. To the best of our knowledge, this is the first research that proposes probabilistic discriminative model for Android malware detection with a thorough study of desired representation of decompiled source code and is the first research work for Android malware detection task that combines both analysis of decompiled source code and application permissions.",
"title": ""
},
{
"docid": "5b134fae94a5cc3a2e1b7cc19c5d29e5",
"text": "We explore making virtual desktops behave in a more physically realistic manner by adding physics simulation and using piling instead of filing as the fundamental organizational structure. Objects can be casually dragged and tossed around, influenced by physical characteristics such as friction and mass, much like we would manipulate lightweight objects in the real world. We present a prototype, called BumpTop, that coherently integrates a variety of interaction and visualization techniques optimized for pen input we have developed to support this new style of desktop organization.",
"title": ""
},
{
"docid": "34c3ba06f9bffddec7a08c8109c7f4b9",
"text": "The role of e-learning technologies entirely depends on the acceptance and execution of required-change in the thinking and behaviour of the users of institutions. The research are constantly reporting that many e-learning projects are falling short of their objectives due to many reasons but on the top is the user resistance to change according to the digital requirements of new era. It is argued that the suitable way for change management in e-learning environment is the training and persuading of users with a view to enhance their digital literacy and thus gradually changing the users’ attitude in positive direction. This paper discusses change management in transition to e-learning system considering pedagogical, cost and technical implications. It also discusses challenges and opportunities for integrating these technologies in higher learning institutions with examples from Turkey GATA (Gülhane Askeri Tıp Akademisi-Gülhane Military Medical Academy).",
"title": ""
},
{
"docid": "851a966bbfee843e5ae1eaf21482ef87",
"text": "The Pittsburgh Sleep Quality Index (PSQI) is a widely used measure of sleep quality in adolescents, but information regarding its psychometric strengths and weaknesses in this population is limited. In particular, questions remain regarding whether it measures one or two sleep quality domains. The aims of the present study were to (a) adapt the PSQI for use in adolescents and young adults, and (b) evaluate the psychometric properties of the adapted measure in this population. The PSQI was slightly modified to make it more appropriate for use in youth populations and was translated into Spanish for administration to the sample population available to the study investigators. It was then administered with validity criterion measures to a community-based sample of Spanish adolescents and young adults (AYA) between 14 and 24 years old (N = 216). The results indicated that the questionnaire (AYA-PSQI-S) assesses a single factor. The total score evidenced good convergent and divergent validity and moderate reliability (Cronbach's alpha = .72). The AYA-PSQI-S demonstrates adequate psychometric properties for use in clinical trials involving adolescents and young adults. Additional research to further evaluate the reliability and validity of the measure for use in clinical settings is warranted.",
"title": ""
},
{
"docid": "10da9f0fd1be99878e280d261ea81ba3",
"text": "The fuzzy vault scheme is a cryptographic primitive being considered for storing fingerprint minutiae protected. A well-known problem of the fuzzy vault scheme is its vulnerability against correlation attack -based cross-matching thereby conflicting with the unlinkability requirement and irreversibility requirement of effective biometric information protection. Yet, it has been demonstrated that in principle a minutiae-based fuzzy vault can be secured against the correlation attack by passing the to-beprotected minutiae through a quantization scheme. Unfortunately, single fingerprints seem not to be capable of providing an acceptable security level against offline attacks. To overcome the aforementioned security issues, this paper shows how an implementation for multiple fingerprints can be derived on base of the implementation for single finger thereby making use of a Guruswami-Sudan algorithm-based decoder for verification. The implementation, of which public C++ source code can be downloaded, is evaluated for single and various multi-finger settings using the MCYTFingerprint-100 database and provides security enhancing features such as the possibility of combination with password and a slow-down mechanism.",
"title": ""
},
{
"docid": "782c8958fa9107b8d1087fe0c79de6ee",
"text": "Credit evaluation is one of the most important and difficult tasks for credit card companies, mortgage companies, banks and other financial institutes. Incorrect credit judgement causes huge financial losses. This work describes the use of an evolutionary-fuzzy system capable of classifying suspicious and non-suspicious credit card transactions. The paper starts with the details of the system used in this work. A series of experiments are described, showing that the complete system is capable of attaining good accuracy and intelligibility levels for real data.",
"title": ""
},
{
"docid": "36776b1372e745f683ca66e7c4421a76",
"text": "This paper presents the analyzed results of rotational torque and suspension force in a bearingless motor with the short-pitch winding, which are based on the computation by finite element method (FEM). The bearingless drive technique is applied to a conventional brushless DC motor, in which the stator windings are arranged at the short-pitch, and encircle only a single stator tooth. At first, the winding arrangement in the stator core, the principle of suspension force generation and the magnetic suspension control method are shown in the bearingless motor with brushless DC structure. The torque and suspension force are computed by FEM using a machine model with the short-pitch winding arrangement, and the computed results are compared between the full-pitch and short-pitch winding arrangements. The advantages of short-pitch winding arrangement are found on the basis of computed results and discussion.",
"title": ""
},
{
"docid": "d18a636768e6aea2e84c7fc59593ec89",
"text": "Enterprise social networking (ESN) techniques have been widely adopted by firms to provide a platform for public communication among employees. This study investigates how the relationships between stressors (i.e., challenge and hindrance stressors) and employee innovation are moderated by task-oriented and relationship-oriented ESN use. Since challenge-hindrance stressors and employee innovation are individual-level variables and task-oriented ESN use and relationship-oriented ESN use are team-level variables, we thus use hierarchical linear model to test this cross-level model. The results of a survey of 191 employees in 50 groups indicate that two ESN use types differentially moderate the relationship between stressors and employee innovation. Specifically, task-oriented ESN use positively moderates the effects of the two stressors on employee innovation, while relationship-oriented ESN use negatively moderates the relationship between the two stressors and employee innovation. In addition, we find that challenge stressors significantly improve employee innovation. Theoretical and practical implications are discussed.",
"title": ""
},
{
"docid": "73ec43c5ed8e245d0a1ff012a6a67f76",
"text": "HERE IS MUCH signal processing devoted to detection and estimation. Detection is the task of detetmitdng if a specific signal set is pteaettt in an obs&tion, whflc estimation is the task of obtaining the va.iues of the parameters derriblng the signal. Often the s@tal is complicated or is corrupted by interfeting signals or noise To facilitate the detection and estimation of signal sets. the obsenation is decomposed by a basis set which spans the signal space [ 1) For many problems of engineering interest, the class of aigttlls being sought are periodic which leads quite natuallv to a decomposition by a basis consistittg of simple petiodic fun=tions, the sines and cosines. The classic Fourier tran.,fot,,, h the mechanism by which we M able to perform this decomposttmn. BY necessity, every observed signal we pmmust be of finite extent. The extent may be adjustable and Axtable. but it must be fire. Proces%ng a fiite-duration observation ~POSCS mteresting and interacting considentior,s on the hamomc analysic rhese consldentions include detectability of tones in the Presence of nearby strong tones, rcoohability of similarstrength nearby tones, tesolvability of Gxifting tona, and biases in estimating the parameten of my of the alonmenhoned signals. For practicality, the data we pare N unifomdy spaced samples of the obsetvcd signal. For convenience. N is highJy composite, and we will zwtme N is evett. The harmottic estm~afes we obtain UtmugJt the discrae Fowie~ tmnsfotm (DFT) arc N mifcwmly spaced samples of the asaciated periodic spectra. This approach in elegant and attnctive when the proce~ scheme is cast as a spectral decomposition in an N-dimensional orthogonal vector space 121. Unfottunately, in mmY practical situations, to obtain meaningful results this elegance must be compmmised. One such t=O,l;..,Nl.N.N+l.",
"title": ""
},
{
"docid": "295212e614cc361b1a5fdd320d39f68b",
"text": "Aiming to meet the explosive growth of mobile data traffic and reduce the network congestion, we study Time Dependent Adaptive Pricing (TDAP) with threshold policies to motivate users to shift their Internet access from peak hours to off-peak hours. With the proposed TDAP scheme, Internet Service Providers (ISPs) will be able to use less network capacity to provide users Internet access service with the same QoS. Simulation and analysis are carried out to investigate the performance of the proposed TDAP scheme based on the real Internet traffic pattern.",
"title": ""
},
{
"docid": "d6a6ee23cd1d863164c79088f75ece30",
"text": "In our work, 3D objects classification has been dealt with convolutional neural networks which is a common paradigm recently in image recognition. In the first phase of experiments, 3D models in ModelNet10 and ModelNet40 data sets were voxelized and scaled with certain parameters. Classical CNN and 3D Dense CNN architectures were designed for training the pre-processed data. In addition, the two trained CNNs were ensembled and the results of them were observed. A success rate of 95.37% achieved on ModelNet10 by using 3D dense CNN, a success rate of 91.24% achieved with ensemble of two CNNs on ModelNet40.",
"title": ""
},
{
"docid": "7279065640e6f2b7aab7a6e91118e0d5",
"text": "Erythrocyte injury such as osmotic shock, oxidative stress or energy depletion stimulates the formation of prostaglandin E2 through activation of cyclooxygenase which in turn activates a Ca2+ permeable cation channel. Increasing cytosolic Ca2+ concentrations activate Ca2+ sensitive K+ channels leading to hyperpolarization, subsequent loss of KCl and (further) cell shrinkage. Ca2+ further stimulates a scramblase shifting phosphatidylserine from the inner to the outer cell membrane. The scramblase is sensitized for the effects of Ca2+ by ceramide which is formed by a sphingomyelinase following several stressors including osmotic shock. The sphingomyelinase is activated by platelet activating factor PAF which is released by activation of phospholipase A2. Phosphatidylserine at the erythrocyte surface is recognised by macrophages which engulf and degrade the affected cells. Moreover, phosphatidylserine exposing erythrocytes may adhere to the vascular wall and thus interfere with microcirculation. Erythrocyte shrinkage and phosphatidylserine exposure ('eryptosis') mimic features of apoptosis in nucleated cells which however, involves several mechanisms lacking in erythrocytes. In kidney medulla, exposure time is usually too short to induce eryptosis despite high osmolarity. Beyond that high Cl- concentrations inhibit the cation channel and high urea concentrations the sphingomyelinase. Eryptosis is inhibited by erythropoietin which thus extends the life span of circulating erythrocytes. Several conditions trigger premature eryptosis thus favouring the development of anemia. On the other hand, eryptosis may be a mechanism of defective erythrocytes to escape hemolysis. Beyond their significance for erythrocyte survival and death the mechanisms involved in 'eryptosis' may similarly contribute to apoptosis of nucleated cells.",
"title": ""
}
] |
scidocsrr
|
0a81286afb381a9f6e2825a03f13265d
|
Prediction of long-term clinical outcomes using simple functional exercise performance tests in patients with COPD: a 5-year prospective cohort study
|
[
{
"docid": "0dc0815505f065472b3929792de638b4",
"text": "Our aim was to comprehensively validate the 1-min sit-to-stand (STS) test in chronic obstructive pulmonary disease (COPD) patients and explore the physiological response to the test.We used data from two longitudinal studies of COPD patients who completed inpatient pulmonary rehabilitation programmes. We collected 1-min STS test, 6-min walk test (6MWT), health-related quality of life, dyspnoea and exercise cardiorespiratory data at admission and discharge. We assessed the learning effect, test-retest reliability, construct validity, responsiveness and minimal important difference of the 1-min STS test.In both studies (n=52 and n=203) the 1-min STS test was strongly correlated with the 6MWT at admission (r=0.59 and 0.64, respectively) and discharge (r=0.67 and 0.68, respectively). Intraclass correlation coefficients (95% CI) between 1-min STS tests were 0.93 (0.83-0.97) for learning effect and 0.99 (0.97-1.00) for reliability. Standardised response means (95% CI) were 0.87 (0.58-1.16) and 0.91 (0.78-1.07). The estimated minimal important difference was three repetitions. End-exercise oxygen consumption, carbon dioxide output, ventilation, breathing frequency and heart rate were similar in the 1-min STS test and 6MWT.The 1-min STS test is a reliable, valid and responsive test for measuring functional exercise capacity in COPD patients and elicited a physiological response comparable to that of the 6MWT.",
"title": ""
}
] |
[
{
"docid": "b25379a7a48ef2b6bcc2df8d84d7680b",
"text": "Microblogging (Twitter or Facebook) has become a very popular communication tool among Internet users in recent years. Information is generated and managed through either computer or mobile devices by one person and is consumed by many other persons, with most of this user-generated content being textual information. As there are a lot of raw data of people posting real time messages about their opinions on a variety of topics in daily life, it is a worthwhile research endeavor to collect and analyze these data, which may be useful for users or managers to make informed decisions, for example. However this problem is challenging because a micro-blog post is usually very short and colloquial, and traditional opinion mining algorithms do not work well in such type of text. Therefore, in this paper, we propose a new system architecture that can automatically analyze the sentiments of these messages. We combine this system with manually annotated data from Twitter, one of the most popular microblogging platforms, for the task of sentiment analysis. In this system, machines can learn how to automatically extract the set of messages which contain opinions, filter out nonopinion messages and determine their sentiment directions (i.e. positive, negative). Experimental results verify the effectiveness of our system on sentiment analysis in real microblogging applications.",
"title": ""
},
{
"docid": "2bba03660a752f7033e8ecd95eb6bdbd",
"text": "Crowdsensing has the potential to support human-driven sensing and data collection at an unprecedented scale. While many organizers of data collection campaigns may have extensive domain knowledge, they do not necessarily have the skills required to develop robust software for crowdsensing. In this paper, we present Mobile Campaign Designer, a tool that simplifies the creation of mobile crowdsensing applications. Using Mobile Campaign Designer, an organizer is able to define parameters about their crowdsensing campaign, and the tool generates the source code and an executable for a tailored mobile application that embodies the current best practices in crowdsensing. An evaluation of the tool shows that users at all levels of technical expertise are capable of creating a crowdsensing application in an average of five minutes, and the generated applications are comparable in quality to existing crowdsensing applications.",
"title": ""
},
{
"docid": "125259c4471d4250214fec50b5e97522",
"text": "The switched reluctance motor (SRM) is a promising drive solution for electric vehicle propulsion thanks to its simple, rugged structure, satisfying performance and low price. Among other SRMs, the axial flux SRM (AFSRM) is a strong candidate for in-wheel drive applications because of its high torque/power density and compact disc shape. In this paper, a four-phase 8-stator-pole 6-rotor-pole double-rotor AFSRM is investigated for an e-bike application. A series of analyses are conducted to reduce the torque ripple by shaping the rotor poles, and a multi-level air gap geometry is designed with specific air gap dimensions at different positions. Both static and dynamic analyses show significant torque ripple reduction while maintaining the average electromagnetic output torque at the demanded level.",
"title": ""
},
{
"docid": "78f4ac2d266d64646a7d9bc735257f9d",
"text": "To achieve dynamic inference in pixel labeling tasks, we propose Pixel-wise Attentional Gating (PAG), which learns to selectively process a subset of spatial locations at each layer of a deep convolutional network. PAG is a generic, architecture-independent, problem-agnostic mechanism that can be readily “plugged in” to an existing model with fine-tuning. We utilize PAG in two ways: 1) learning spatially varying pooling fields that improve model performance without the extra computation cost associated with multi-scale pooling, and 2) learning a dynamic computation policy for each pixel to decrease total computation (FLOPs) while maintaining accuracy. We extensively evaluate PAG on a variety of per-pixel labeling tasks, including semantic segmentation, boundary detection, monocular depth and surface normal estimation. We demonstrate that PAG allows competitive or state-ofthe-art performance on these tasks. Our experiments show that PAG learns dynamic spatial allocation of computation over the input image which provides better performance trade-offs compared to related approaches (e.g., truncating deep models or dynamically skipping whole layers). Generally, we observe PAG can reduce computation by 10% without noticeable loss in accuracy and performance degrades gracefully when imposing stronger computational constraints.",
"title": ""
},
{
"docid": "f53f739dd526e3f954aabded123f0710",
"text": "Successful Free/Libre Open Source Software (FLOSS) projects must attract and retain high-quality talent. Researchers have invested considerable effort in the study of core and peripheral FLOSS developers. To this point, one critical subset of developers that have not been studied are One-Time code Contributors (OTC) – those that have had exactly one patch accepted. To understand why OTCs have not contributed another patch and provide guidance to FLOSS projects on retaining OTCs, this study seeks to understand the impressions, motivations, and barriers experienced by OTCs. We conducted an online survey of OTCs from 23 popular FLOSS projects. Based on the 184 responses received, we observed that OTCs generally have positive impressions of their FLOSS project and are driven by a variety of motivations. Most OTCs primarily made contributions to fix bugs that impeded their work and did not plan on becoming long term contributors. Furthermore, OTCs encounter a number of barriers that prevent them from continuing to contribute to the project. Based on our findings, there are some concrete actions FLOSS projects can take to increase the chances of converting OTCs into long-term contributors.",
"title": ""
},
{
"docid": "21916d34fb470601fb6376c4bcd0839a",
"text": "BACKGROUND\nCutibacterium (Propionibacterium) acnes is assumed to play an important role in the pathogenesis of acne.\n\n\nOBJECTIVES\nTo examine if clones with distinct virulence properties are associated with acne.\n\n\nMETHODS\nMultiple C. acnes isolates from follicles and surface skin of patients with moderate to severe acne and healthy controls were characterized by multilocus sequence typing. To determine if CC18 isolates from acne patients differ from those of controls in the possession of virulence genes or lack of genes conducive to a harmonious coexistence the full genomes of dominating CC18 follicular clones from six patients and five controls were sequenced.\n\n\nRESULTS\nIndividuals carried one to ten clones simultaneously. The dominating C. acnes clones in follicles from acne patients were exclusively from the phylogenetic clade I-1a and all belonged to clonal complex CC18 with the exception of one patient dominated by the worldwide-disseminated and often antibiotic resistant clone ST3. The clonal composition of healthy follicles showed a more heterogeneous pattern with follicles dominated by clones representing the phylogenetic clades I-1a, I-1b, I-2 and II. Comparison of follicular CC18 gene contents, allelic versions of putative virulence genes and their promoter regions, and 54 variable-length intragenic and inter-genic homopolymeric tracts showed extensive conservation and no difference associated with the clinical origin of isolates.\n\n\nCONCLUSIONS\nThe study supports that C. acnes strains from clonal complex CC18 and the often antibiotic resistant clone ST3 are associated with acne and suggests that susceptibility of the host rather than differences within these clones may determine the clinical outcome of colonization.",
"title": ""
},
{
"docid": "c157b149d334b2cc1f718d70ef85e75e",
"text": "The large inter-individual variability within the normal population, the limited reproducibility due to habituation or fatigue, and the impact of instruction and the subject's motivation, all constitute a major problem in posturography. These aspects hinder reliable evaluation of the changes in balance control in the case of disease and complicate objectivation of the impact of therapy and sensory input on balance control. In this study, we examine whether measurement of balance control near individualized limits of stability and under very challenging sensory conditions might reduce inter- and intra-individual variability compared to the well-known Sensory Organization Test (SOT). To do so, subjects balance on a platform on which instability increases automatically until body orientation or body sway velocity surpasses a safety limit. The maximum tolerated platform instability is then used as a measure for balance control under 10 different sensory conditions. Ninety-seven healthy subjects and 107 patients suffering from chronic dizziness (whiplash syndrome (n = 25), Meniere's disease (n = 28), acute (n = 28) or gradual (n = 26) peripheral function loss) were tested. In both healthy subjects and patients this approach resulted in a low intra-individual variability (< 14.5(%). In healthy subjects and patients, balance control was maximally affected by closure of the eyes and by vibration of the Achilles' tendons. The other perturbation techniques applied (sway referenced vision or platform, cooling of the foot soles) were less effective. Combining perturbation techniques reduced balance control even more, but the effect was less than the linear summation of the effect induced by the techniques applied separately. The group averages of healthy subjects show that vision contributed maximum 37%, propriocepsis minimum 26%, and labyrinths maximum 44% to balance control in healthy subjects. However, a large inter-individual variability was observed. Balance control of each patient group was less than in healthy subjects in all sensory conditions. Similar to healthy subjects, patients also show a large inter-individual variability, which results in a low sensitivity of the test. With the exception of some minor differences between Whiplash and Meniere patients, balance control did not differ between the four patient groups. This points to a low specificity of the test. Balance control was not correlated with the outcome of the standard vestibular examination. This study strengthens our notion that the contribution of the sensory inputs to balance control differs considerably per individual and may simply be due to differences in the vestibular function related to the specific pathology, but also to differences in motor learning strategies in relation to daily life requirements. It is difficult to provide clinically relevant normative data. We conclude that, like the SOT, the current test is merely a functional test of balance with limited diagnostic value.",
"title": ""
},
{
"docid": "f562bd72463945bd35d42894e4815543",
"text": "Sound levels in animal shelters regularly exceed 100 dB. Noise is a physical stressor on animals that can lead to behavioral, physiological, and anatomical responses. There are currently no policies regulating noise levels in dog kennels. The objective of this study was to evaluate the noise levels dogs are exposed to in an animal shelter on a continuous basis and to determine the need, if any, for noise regulations. Noise levels at a newly constructed animal shelter were measured using a noise dosimeter in all indoor dog-holding areas. These holding areas included large dog adoptable, large dog stray, small dog adoptable, small dog stray, and front intake. The noise level was highest in the large adoptable area. Sound from the large adoptable area affected some of the noise measurements for the other rooms. Peak noise levels regularly exceeded the measuring capability of the dosimeter (118.9 dBA). Often, in new facility design, there is little attention paid to noise abatement, despite the evidence that noise causes physical and psychological stress on dogs. To meet their behavioral and physical needs, kennel design should also address optimal sound range.",
"title": ""
},
{
"docid": "c89b903e497ebe8e8d89e8d1d931fae1",
"text": "Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of time series forecasting problems with a high degree of accuracy. However, despite all advantages cited for artificial neural networks, their performance for some real time series is not satisfactory. Improving forecasting especially time series forecasting accuracy is an important yet often difficult task facing forecasters. Both theoretical and empirical findings have indicated that integration of different models can be an effective way of improving upon their predictive performance, especially when the models in the ensemble are quite different. In this paper, a novel hybrid model of artificial neural networks is proposed using auto-regressive integrated moving average (ARIMA) models in order to yield a more accurate forecasting model than artificial neural networks. The empirical results with three well-known real data sets indicate that the proposed model can be an effective way to improve forecasting accuracy achieved by artificial neural networks. Therefore, it can be used as an appropriate alternative model for forecasting task, especially when higher forecasting accuracy is needed. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "eb6ee2fd1f7f1d0d767e4dde2d811bed",
"text": "This paper presents a mathematical framework for performance analysis of Behavior Trees (BTs). BTs are a recent alternative to Finite State Machines (FSMs), for doing modular task switching in robot control architectures. By encoding the switching logic in a tree structure, instead of distributing it in the states of a FSM, modularity and reusability are improved. In this paper, we compute performance measures, such as success/failure probabilities and execution times, for plans encoded and executed by BTs. To do this, we first introduce Stochastic Behavior Trees (SBT), where we assume that the probabilistic performance measures of the basic action controllers are given. We then show how Discrete Time Markov Chains (DTMC) can be used to aggregate these measures from one level of the tree to the next. The recursive structure of the tree then enables us to step by step propagate such estimates from the leaves (basic action controllers) to the root (complete task execution). Finally, we verify our analytical results using massive Monte Carlo simulations, and provide an illustrative example of the results for a complex robotic task.",
"title": ""
},
{
"docid": "eb3f72e91f13a3c6faee53c6d4cd4174",
"text": "Recent studies indicate that nearly 75% of queries issued to Web search engines aim at finding information about entities, which are material objects or concepts that exist in the real world or fiction (e.g. people, organizations, products, etc.). Most common information needs underlying this type of queries include finding a certain entity (e.g. “Einstein relativity theory”), a particular attribute or property of an entity (e.g. “Who founded Intel?”) or a list of entities satisfying a certain criteria (e.g. “Formula 1 drivers that won the Monaco Grand Prix”). These information needs can be efficiently addressed by presenting structured information about a target entity or a list of entities retrieved from a knowledge graph either directly as search results or in addition to the ranked list of documents. This tutorial provides a summary of the recent research in knowledge graph entity representation methods and retrieval models. The first part of this tutorial introduces state-of-the-art methods for entity representation, from multi-fielded documents with flat and hierarchical structure to latent dimensional representations based on tensor factorization, while the second part presents recent developments in entity retrieval models, including Fielded Sequential Dependence Model (FSDM) and its parametric extension (PFSDM), as well as entity set expansion and ranking methods.",
"title": ""
},
{
"docid": "e98e902e22d9b8acb6e9e9dcd241471c",
"text": "We introduce a novel iterative approach for event coreference resolution that gradually builds event clusters by exploiting inter-dependencies among event mentions within the same chain as well as across event chains. Among event mentions in the same chain, we distinguish withinand cross-document event coreference links by using two distinct pairwise classifiers, trained separately to capture differences in feature distributions of withinand crossdocument event clusters. Our event coreference approach alternates between WD and CD clustering and combines arguments from both event clusters after every merge, continuing till no more merge can be made. And then it performs further merging between event chains that are both closely related to a set of other chains of events. Experiments on the ECB+ corpus show that our model outperforms state-of-the-art methods in joint task of WD and CD event coreference resolution.",
"title": ""
},
{
"docid": "2d0d42a6c712d93ace0bf37ffe786a75",
"text": "Personalized search systems tailor search results to the current user intent using historic search interactions. This relies on being able to find pertinent information in that user's search history, which can be challenging for unseen queries and for new search scenarios. Building richer models of users' current and historic search tasks can help improve the likelihood of finding relevant content and enhance the relevance and coverage of personalization methods. The task-based approach can be applied to the current user's search history, or as we focus on here, all users' search histories as so-called \"groupization\" (a variant of personalization whereby other users' profiles can be used to personalize the search experience). We describe a method whereby we mine historic search-engine logs to find other users performing similar tasks to the current user and leverage their on-task behavior to identify Web pages to promote in the current ranking. We investigate the effectiveness of this approach versus query-based matching and finding related historic activity from the current user (i.e., group versus individual). As part of our studies we also explore the use of the on-task behavior of particular user cohorts, such as people who are expert in the topic currently being searched, rather than all other users. Our approach yields promising gains in retrieval performance, and has direct implications for improving personalization in search systems.",
"title": ""
},
{
"docid": "190bf6cd8a2e9a5764b42d01b7aec7c8",
"text": "We propose a method for compiling a class of Σ-protocols (3-move public-coin protocols) into non-interactive zero-knowledge arguments. The method is based on homomorphic encryption and does not use random oracles. It only requires that a private/public key pair is set up for the verifier. The method applies to all known discrete-log based Σ-protocols. As applications, we obtain non-interactive threshold RSA without random oracles, and non-interactive zero-knowledge for NP more efficiently than by previous methods.",
"title": ""
},
{
"docid": "2a0577aa61ca1cbde207306fdb5beb08",
"text": "In recent years, researchers have shown that unwanted web tracking is on the rise, as advertisers are trying to capitalize on users' online activity, using increasingly intrusive and sophisticated techniques. Among these, browser fingerprinting has received the most attention since it allows trackers to uniquely identify users despite the clearing of cookies and the use of a browser's private mode. In this paper, we investigate and quantify the fingerprintability of browser extensions, such as, AdBlock and Ghostery. We show that an extension's organic activity in a page's DOM can be used to infer its presence, and develop XHound, the first fully automated system for fingerprinting browser extensions. By applying XHound to the 10,000 most popular Google Chrome extensions, we find that a significant fraction of popular browser extensions are fingerprintable and could thus be used to supplement existing fingerprinting methods. Moreover, by surveying the installed extensions of 854 users, we discover that many users tend to install different sets of fingerprintable browser extensions and could thus be uniquely, or near-uniquely identifiable by extension-based fingerprinting. We use XHound's results to build a proof-of-concept extension-fingerprinting script and show that trackers can fingerprint tens of extensions in just a few seconds. Finally, we describe why the fingerprinting of extensions is more intrusive than the fingerprinting of other browser and system properties, and sketch two different approaches towards defending against extension-based fingerprinting.",
"title": ""
},
{
"docid": "f794b6914cc99fcd2a13b81e6fbe12d2",
"text": "An unprecedented rise in the number of asylum seekers and refugees was seen in Europe in 2015, and it seems that numbers are not going to be reduced considerably in 2016. Several studies have tried to estimate risk of infectious diseases associated with migration but only very rarely these studies make a distinction on reason for migration. In these studies, workers, students, and refugees who have moved to a foreign country are all taken to have the same disease epidemiology. A common disease epidemiology across very different migrant groups is unlikely, so in this review of infectious diseases in asylum seekers and refugees, we describe infectious disease prevalence in various types of migrants. We identified 51 studies eligible for inclusion. The highest infectious disease prevalence in refugee and asylum seeker populations have been reported for latent tuberculosis (9-45%), active tuberculosis (up to 11%), and hepatitis B (up to 12%). The same population had low prevalence of malaria (7%) and hepatitis C (up to 5%). There have been recent case reports from European countries of cutaneous diphtheria, louse-born relapsing fever, and shigella in the asylum-seeking and refugee population. The increased risk that refugees and asylum seekers have for infection with specific diseases can largely be attributed to poor living conditions during and after migration. Even though we see high transmission in the refugee populations, there is very little risk of spread to the autochthonous population. These findings support the efforts towards creating a common European standard for the health reception and reporting of asylum seekers and refugees.",
"title": ""
},
{
"docid": "be3bf1e95312cc0ce115e3aaac2ecc96",
"text": "This paper contributes a first study into how different human users deliver simultaneous control and feedback signals during human-robot interaction. As part of this work, we formalize and present a general interactive learning framework for online cooperation between humans and reinforcement learning agents. In many humanmachine interaction settings, there is a growing gap between the degrees-of-freedom of complex semi-autonomous systems and the number of human control channels. Simple human control and feedback mechanisms are required to close this gap and allow for better collaboration between humans and machines on complex tasks. To better inform the design of concurrent control and feedback interfaces, we present experimental results from a human-robot collaborative domain wherein the human must simultaneously deliver both control and feedback signals to interactively train an actor-critic reinforcement learning robot. We compare three experimental conditions: 1) human delivered control signals, 2) reward-shaping feedback signals, and 3) simultaneous control and feedback. Our results suggest that subjects provide less feedback when simultaneously delivering feedback and control signals and that control signal quality is not significantly diminished. Our data suggest that subjects may also modify when and how they provide feedback. Through algorithmic development and tuning informed by this study, we expect semi-autonomous actions of robotic agents can be better shaped by human feedback, allowing for seamless collaboration and improved performance in difficult interactive domains. University of Alberta, Dep. of Computing Science, Edmonton, Canada University of Alberta, Deps. of Medicine and Computing Science, Edmonton, Alberta, Canada. Correspondence to: Kory Mathewson <[email protected]>. Under review for the 34 th International Conference on Machine Learning, Sydney, Australia, 2017. JMLR: W&CP. Copyright 2017 by the authors. Figure 1. Experimental configuration. One of the study participants with the Myo band on their right arm providing a control signal, while simultaneously providing feedback signals with their left hand. The Aldebaran Nao robot simulation is visible on the screen alongside experimental logging.",
"title": ""
},
{
"docid": "e4fb31ebacb093932517719884264b46",
"text": "Monitoring and control the environmental parameters in agricultural constructions are essential to improve energy efficiency and productivity. Real-time monitoring allows the detection and early correction of unfavourable situations, optimizing consumption and protecting crops against diseases. This work describes an automatic system for monitoring farm environments with the aim of increasing efficiency and quality of the agricultural environment. Based on the Internet of Things, the system uses a low-cost wireless sensor network, called Sun Spot, programmed in Java, with the Java VM running on the device itself and the Arduino platform for Internet connection. The data collected is shared through the social network of Facebook. The temperature and brightness parameters are monitored in real time. Other sensors can be added to monitor the issue for specific purposes. The results show that conditions within greenhouses may in some cases be very different from those expected. Therefore, the proposed system can provide an effective tool to improve the quality of agricultural production and energy efficiency.",
"title": ""
},
{
"docid": "370ec5c556b70ead92bc45d1f419acaf",
"text": "Despite the identification of circulating tumor cells (CTCs) and cell-free DNA (cfDNA) as potential blood-based biomarkers capable of providing prognostic and predictive information in cancer, they have not been incorporated into routine clinical practice. This resistance is due in part to technological limitations hampering CTC and cfDNA analysis, as well as a limited understanding of precisely how to interpret emergent biomarkers across various disease stages and tumor types. In recognition of these challenges, a group of researchers and clinicians focused on blood-based biomarker development met at the Canadian Cancer Trials Group (CCTG) Spring Meeting in Toronto, Canada on 29 April 2016 for a workshop discussing novel CTC/cfDNA technologies, interpretation of data obtained from CTCs versus cfDNA, challenges regarding disease evolution and heterogeneity, and logistical considerations for incorporation of CTCs/cfDNA into clinical trials, and ultimately into routine clinical use. The objectives of this workshop included discussion of the current barriers to clinical implementation and recent progress made in the field, as well as fueling meaningful collaborations and partnerships between researchers and clinicians. We anticipate that the considerations highlighted at this workshop will lead to advances in both basic and translational research and will ultimately impact patient management strategies and patient outcomes.",
"title": ""
},
{
"docid": "86fca69ae48592e06109f7b05180db28",
"text": "Background: The software development industry has been adopting agile methods instead of traditional software development methods because they are more flexible and can bring benefits such as handling requirements changes, productivity gains and business alignment. Objective: This study seeks to evaluate, synthesize, and present aspects of research on agile methods tailoring including the method tailoring approaches adopted and the criteria used for agile practice selection. Method: The method adopted was a Systematic Literature Review (SLR) on studies published from 2002 to 2014. Results: 56 out of 783 papers have been identified as describing agile method tailoring approaches. These studies have been identified as case studies regarding the empirical research, as solution proposals regarding the research type, and as evaluation studies regarding the research validation type. Most of the papers used method engineering to implement tailoring and were not specific to any agile method on their scope. Conclusion: Most of agile methods tailoring research papers proposed or improved a technique, were implemented as case studies analyzing one case in details and validated their findings using evaluation. Method engineering was the base for tailoring, the approaches are independent of agile method and the main criteria used are internal environment and objectives variables. © 2015 Elsevier Inc. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
65d3119be08434129f24596f5b03613b
|
5G mm-Wave front-end-module design with advanced SOI process
|
[
{
"docid": "707a9773d79e04e8ee517845faa8e79f",
"text": "In this paper, we discuss a DC-20GHz single-pole double-throw (SPDT) transmit/receive switch (T/R switch) design in 45nm SOI process. This circuit is dedicated to fully integrated CMOS RF front end modules for X/Ku band satellite communication applications. The switch exhibits a measured insertion loss of 0.59dB, return loss of 23dB, and isolation of 17dB at 14GHz. The input 1dB compression point is 31.5dBm, and one-tone IIP3 is 63.8dBm. This state of the art performance is comparable or even better than existing commercial GaAs SPDT in this frequency range. The core area is only 90um × 100um, which is very helpful for low cost large element phase array designs.",
"title": ""
}
] |
[
{
"docid": "4a779f5e15cc60f131a77c69e09e54bc",
"text": "We introduce a new iterative regularization procedure for inverse problems based on the use of Bregman distances, with particular focus on problems arising in image processing. We are motivated by the problem of restoring noisy and blurry images via variational methods by using total variation regularization. We obtain rigorous convergence results and effective stopping criteria for the general procedure. The numerical results for denoising appear to give significant improvement over standard models, and preliminary results for deblurring/denoising are very encouraging.",
"title": ""
},
{
"docid": "bd1c93dfc02d90ad2a0c7343236342a7",
"text": "Osteochondritis dissecans (OCD) of the capitellum is an uncommon disorder seen primarily in the adolescent overhead athlete. Unlike Panner disease, a self-limiting condition of the immature capitellum, OCD is multifactorial and likely results from microtrauma in the setting of cartilage mismatch and vascular susceptibility. The natural history of OCD is poorly understood, and degenerative joint disease may develop over time. Multiple modalities aid in diagnosis, including radiography, MRI, and magnetic resonance arthrography. Lesion size, location, and grade determine management, which should attempt to address subchondral bone loss and articular cartilage damage. Early, stable lesions are managed with rest. Surgery should be considered for unstable lesions. Most investigators advocate arthroscopic débridement with marrow stimulation. Fragment fixation and bone grafting also have provided good short-term results, but concerns persist regarding the healing potential of advanced lesions. Osteochondral autograft transplantation appears to be promising and should be reserved for larger, higher grade lesions. Clinical outcomes and return to sport are variable. Longer-term follow-up studies are necessary to fully assess surgical management, and patients must be counseled appropriately.",
"title": ""
},
{
"docid": "fe407f4983ef6cc2e257d63a173c8487",
"text": "We present a semantically rich graph representation for indoor robotic navigation. Our graph representation encodes: semantic locations such as offices or corridors as nodes, and navigational behaviors such as enter office or cross a corridor as edges. In particular, our navigational behaviors operate directly from visual inputs to produce motor controls and are implemented with deep learning architectures. This enables the robot to avoid explicit computation of its precise location or the geometry of the environment, and enables navigation at a higher level of semantic abstraction. We evaluate the effectiveness of our representation by simulating navigation tasks in a large number of virtual environments. Our results show that using a simple sets of perceptual and navigational behaviors, the proposed approach can successfully guide the way of the robot as it completes navigational missions such as going to a specific office. Furthermore, our implementation shows to be effective to control the selection and switching of behaviors.",
"title": ""
},
{
"docid": "ea982e20cc739fc88ed6724feba3d896",
"text": "We report new evidence on the emotional, demographic, and situational correlates of boredom from a rich experience sample capturing 1.1 million emotional and time-use reports from 3,867 U.S. adults. Subjects report boredom in 2.8% of the 30-min sampling periods, and 63% of participants report experiencing boredom at least once across the 10-day sampling period. We find that boredom is more likely to co-occur with negative, rather than positive, emotions, and is particularly predictive of loneliness, anger, sadness, and worry. Boredom is more prevalent among men, youths, the unmarried, and those of lower income. We find that differences in how such demographic groups spend their time account for up to one third of the observed differences in overall boredom. The importance of situations in predicting boredom is additionally underscored by the high prevalence of boredom in specific situations involving monotonous or difficult tasks (e.g., working, studying) or contexts where one's autonomy might be constrained (e.g., time with coworkers, afternoons, at school). Overall, our findings are consistent with cognitive accounts that cast boredom as emerging from situations in which engagement is difficult, and are less consistent with accounts that exclusively associate boredom with low arousal or with situations lacking in meaning. (PsycINFO Database Record",
"title": ""
},
{
"docid": "65b933f72f74a17777baa966658f4c42",
"text": "We describe the epidemic of obesity in the United States: escalating rates of obesity in both adults and children, and why these qualify as an epidemic; disparities in overweight and obesity by race/ethnicity and sex, and the staggering health and economic consequences of obesity. Physical activity contributes to the epidemic as explained by new patterns of physical activity in adults and children. Changing patterns of food consumption, such as rising carbohydrate intake--particularly in the form of soda and other foods containing high fructose corn syrup--also contribute to obesity. We present as a central concept, the food environment--the contexts within which food choices are made--and its contribution to food consumption: the abundance and ubiquity of certain types of foods over others; limited food choices available in certain settings, such as schools; the market economy of the United States that exposes individuals to many marketing/advertising strategies. Advertising tailored to children plays an important role.",
"title": ""
},
{
"docid": "8a4772e698355c463692ebcb27e68ea7",
"text": "Abstracr-Test data generation in program testing is the process of identifying a set of test data which satisfies given testing criterion. Most of the existing test data generators 161, [It], [lo], [16], [30] use symbolic evaluation to derive test data. However, in practical programs this technique frequently requires complex algebraic manipulations, especially in the presence of arrays. In this paper we present an alternative approach of test data generation which is based on actual execution of the program under test, function minimization methods, and dynamic data flow analysis. Test data are developed for the program using actual values of input variables. When the program is executed, the program execution flow is monitored. If during program execution an undesirable execution flow is observed (e.g., the “actual” path does not correspond to the selected control path) then function minimization search algorithms are used to automatically locate the values of input variables for which the selected path is traversed. In addition, dynamic data Bow analysis is used to determine those input variables responsible for the undesirable program behavior, leading to significant speedup of the search process. The approach of generating test data is then extended to programs with dynamic data structures, and a search method based on dynamic data flow analysis and backtracking is presented. In the approach described in this paper, values of array indexes and pointers are known at each step of program execution, and this approach exploits this information to overcome difficulties of array and pointer handling; as a result, the effectiveness of test data generation can be significantly improved.",
"title": ""
},
{
"docid": "f55ac9e319ad8b9782a34251007a5d06",
"text": "The availability in machine-readable form of descriptions of the structure of documents, as well as of the document discourse (e.g. the scientific discourse within scholarly articles), is crucial for facilitating semantic publishing and the overall comprehension of documents by both users and machines. In this paper we introduce DoCO, the Document Components Ontology, an OWL 2 DL ontology that provides a general-purpose structured vocabulary of document elements to describe both structural and rhetorical document components in RDF. In addition to describing the formal description of the ontology, this paper showcases its utility in practice in a variety of our own applications and other activities of the Semantic Publishing community that rely on DoCO to annotate and retrieve document components of scholarly articles.",
"title": ""
},
{
"docid": "114e6cde6a38bcbb809f19b80110c16f",
"text": "This paper proposes a neural semantic parsing approach – Sequence-to-Action, which models semantic parsing as an endto-end semantic graph generation process. Our method simultaneously leverages the advantages from two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which has a tight-coupling with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose a RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.",
"title": ""
},
{
"docid": "7100fea85ba7c88f0281f11e7ddc04a9",
"text": "This paper reports the spoof surface plasmons polaritons (SSPPs) based multi-band bandpass filter. An efficient back to back transition from Quasi TEM mode of microstrip line to SSPP mode has been designed by etching a gradient corrugated structure on the metal strip; while keeping ground plane unaltered. SSPP wave is found to be highly confined within the teeth part of corrugation. Complementary split ring resonator has been etched in the ground plane to obtained multiband bandpass filter response. Excellent conversion from QTEM mode to SSPP mode has been observed.",
"title": ""
},
{
"docid": "25ca6416d95398eb0e79c1357dcf6554",
"text": "Bayesian Learning with Dependency Structures via Latent Factors, Mixtures, and Copulas by Shaobo Han Department of Electrical and Computer Engineering Duke University Date: Approved: Lawrence Carin, Supervisor",
"title": ""
},
{
"docid": "18fcdcadc3290f9c8dd09f0aa1a27e8f",
"text": "The Industry 4.0 is a vision that includes connecting more intensively physical systems with their virtual counterparts in computers. This computerization of manufacturing will bring many advantages, including allowing data gathering, integration and analysis in the scale not seen earlier. In this paper we describe our Semantic Big Data Historian that is intended to handle large volumes of heterogeneous data gathered from distributed data sources. We describe the approach and implementation with a special focus on using Semantic Web technologies for integrating the data.",
"title": ""
},
{
"docid": "4e91d37de7701e4a03c506c602ef3455",
"text": "This paper presents the design of Glow, a machine learning compiler for heterogeneous hardware. It is a pragmatic approach to compilation that enables the generation of highly optimized code for multiple targets. Glow lowers the traditional neural network dataflow graph into a two-phase strongly-typed intermediate representation. The high-level intermediate representation allows the optimizer to perform domain-specific optimizations. The lower-level instruction-based address-only intermediate representation allows the compiler to perform memory-related optimizations, such as instruction scheduling, static memory allocation and copy elimination. At the lowest level, the optimizer performs machine-specific code generation to take advantage of specialized hardware features. Glow features a lowering phase which enables the compiler to support a high number of input operators as well as a large number of hardware targets by eliminating the need to implement all operators on all targets. The lowering phase is designed to reduce the input space and allow new hardware backends to focus on a small number of linear algebra primitives.",
"title": ""
},
{
"docid": "1a9d276c4571419e0d1b297f248d874d",
"text": "Organizational culture plays a critical role in the acceptance and adoption of agile principles by a traditional software development organization (Chan & Thong, 2008). Organizations must understand the differences that exist between traditional software development principles and agile principles. Based on an analysis of the literature published between 2003 and 2010, this study examines nine distinct organizational cultural factors that require change, including management style, communication, development team practices, knowledge management, and customer interactions.",
"title": ""
},
{
"docid": "abc709735ff3566b9d3efa3bb9babd6e",
"text": "Disaster scenarios involve a multitude of obstacles that are difficult to traverse for humans and robots alike. Most robotic search and rescue solutions to this problem involve large, tank-like robots that use brute force to cross difficult terrain; however, these large robots may cause secondary damage. H.E.R.A.L.D, the Hybrid Exploration Robot for Air and Land Deployment, is a novel integrated system of three nimble, lightweight robots which can travel over difficult obstacles by air, but also travel through rubble. We present the design methodology and optimization of each robot, as well as design and testing of the physical integration of the system as a whole, and compare the performance of the robots to the state of the art.",
"title": ""
},
{
"docid": "29479201c12e99eb9802dd05cff60c36",
"text": "Exposures to air pollution in the form of particulate matter (PM) can result in excess production of reactive oxygen species (ROS) in the respiratory system, potentially causing both localized cellular injury and triggering a systemic inflammatory response. PM-induced inflammation in the lung is modulated in large part by alveolar macrophages and their biochemical signaling, including production of inflammatory cytokines, the primary mechanism via which inflammation is initiated and sustained. We developed a robust, relevant, and flexible method employing a rat alveolar macrophage cell line (NR8383) which can be applied to routine samples of PM from air quality monitoring sites to gain insight into the drivers of PM toxicity that lead to oxidative stress and inflammation. Method performance was characterized using extracts of ambient and vehicular engine exhaust PM samples. Our results indicate that the reproducibility and the sensitivity of the method are satisfactory and comparisons between PM samples can be made with good precision. The average relative percent difference for all genes detected during 10 different exposures was 17.1%. Our analysis demonstrated that 71% of genes had an average signal to noise ratio (SNR) ≥ 3. Our time course study suggests that 4 h may be an optimal in vitro exposure time for observing short-term effects of PM and capturing the initial steps of inflammatory signaling. The 4 h exposure resulted in the detection of 57 genes (out of 84 total), of which 86% had altered expression. Similarities and conserved gene signaling regulation among the PM samples were demonstrated through hierarchical clustering and other analyses. Overlying the core congruent patterns were differentially regulated genes that resulted in distinct sample-specific gene expression \"fingerprints.\" Consistent upregulation of Il1f5 and downregulation of Ccr7 was observed across all samples, while TNFα was upregulated in half of the samples and downregulated in the other half. Overall, this PM-induced cytokine expression assay could be effectively integrated into health studies and air quality monitoring programs to better understand relationships between specific PM components, oxidative stress activity and inflammatory signaling potential.",
"title": ""
},
{
"docid": "b132b6aedba7415f2ccaa3783fafd271",
"text": "Recent technologies enable electronic and RF circuits in communication devices and radar to be miniaturized and become physically smaller in size. Antenna design has been one of the key limiting constraints to the development of small communication terminals and also in meeting next generation and radar requirements. Multiple antenna technologies (MATs) have gained much attention in the last few years because of the huge gain. MATs can enhance the reliability and the channel capacity levels. Furthermore, multiple antenna systems can have a big contribution to reduce the interference both in the uplink and the downlink. To increase the communication systems reliability, multiple antennas can be installed at the transmitter or/and at the receiver. The idea behind multiple antenna diversity is to supply the receiver by multiple versions of the same signal transmitted via independent channels. In modern communication transceiver and radar systems, primary aims are to direct high power RF signal from transmitter to antenna while preventing leakage of that large signal into more sensitive frontend of receiver. So, a Single-Pole Double-Throw (SPDT) Transmitter/Receiver (T/R) Switch plays an important role. In this paper, design of smart distributed subarray MIMO (DS-MIMO) microstrip antenna system with controller unit and frequency agile has been introduced and investigated. All the entire proposed antenna system has been evaluated using a commercial software. The final proposed design has been fabricated and the radiation characteristics have been illustrated using network analyzer to meet the requirements for communication and radar applications.",
"title": ""
},
{
"docid": "074d4a552c82511d942a58b93d51c38a",
"text": "This is a survey of neural network applications in the real-world scenario. It provides a taxonomy of artificial neural networks (ANNs) and furnish the reader with knowledge of current and emerging trends in ANN applications research and area of focus for researchers. Additionally, the study presents ANN application challenges, contributions, compare performances and critiques methods. The study covers many applications of ANN techniques in various disciplines which include computing, science, engineering, medicine, environmental, agriculture, mining, technology, climate, business, arts, and nanotechnology, etc. The study assesses ANN contributions, compare performances and critiques methods. The study found that neural-network models such as feedforward and feedback propagation artificial neural networks are performing better in its application to human problems. Therefore, we proposed feedforward and feedback propagation ANN models for research focus based on data analysis factors like accuracy, processing speed, latency, fault tolerance, volume, scalability, convergence, and performance. Moreover, we recommend that instead of applying a single method, future research can focus on combining ANN models into one network-wide application.",
"title": ""
},
{
"docid": "55a6c14a7445b1903223f59ad4ad9b77",
"text": "Energy and environmental issues are among the major concerns facing the global community today. Transportation fuel represents a large proportion of energy consumption, not only in the US, but also worldwide. As fossil fuel is being depleted, new substitutes are needed to provide energy. Ethanol, which has been produced mainly from the fermentation of corn starch in the US, has been regarded as one of the main liquid transportation fuels that can take the place of fossil fuel. However, limitations in the supply of starch are creating a need for different substrates. Forest biomass is believed to be one of the most abundant sources of sugars, although much research has been reported on herbaceous grass, agricultural residue, and municipal waste. The use of biomass sugars entails pretreatment to disrupt the lignin-carbohydrate complex and expose carbohydrates to enzymes. This paper reviews pretreatment technologies from the perspective of their potential use with wood, bark, and forest residues. Acetic acid catalysis is suggested for the first time to be used in steam explosion pretreatment. Its pretreatment economics, as well as that for ammonia fiber explosion pretreatment, is estimated. This analysis suggests that both are promising techniques worthy of further exploration or optimization for commercialization.",
"title": ""
},
{
"docid": "a576a6bf249616d186657a48c2aec071",
"text": "Penumbras, or soft shadows, are an important means to enhance the realistic ap pearance of computer generated images. We present a fast method based on Minkowski operators to reduce t he run ime for penumbra calculation with stochastic ray tracing. Detailed run time analysis on some examples shows that the new method is significantly faster than the conventional approach. Moreover, it adapts to the environment so that small penumbras are calculated faster than larger ones. The algorithm needs at most twice as much memory as the underlying ray tracing algorithm.",
"title": ""
},
{
"docid": "16dae5a68647c9a8aa93b900eb470eb4",
"text": "Saving power in datacenter networks has become a pressing issue. ElasticTree and CARPO fat-tree networks have recently been proposed to reduce power consumption by using sleep mode during the operation stage of the network. In this paper, we address the design stage where the right switch size is evaluated to maximize power saving during the expected operation of the network. Our findings reveal that deploying a large number of small switches is more power-efficient than a small number of large switches when the traffic demand is relatively moderate or when servers exchanging traffic are in close proximity. We also discuss the impact of sleep mode on performance such as packet delay and loss.",
"title": ""
}
] |
scidocsrr
|
2e9015433f83b79fb13724ffacc0bdad
|
Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability
|
[
{
"docid": "ad7f49832562d27534f11b162e28f51b",
"text": "Gaze is an important component of social interaction. The function, evolution and neurobiology of gaze processing are therefore of interest to a number of researchers. This review discusses the evolutionary role of social gaze in vertebrates (focusing on primates), and a hypothesis that this role has changed substantially for primates compared to other animals. This change may have been driven by morphological changes to the face and eyes of primates, limitations in the facial anatomy of other vertebrates, changes in the ecology of the environment in which primates live, and a necessity to communicate information about the environment, emotional and mental states. The eyes represent different levels of signal value depending on the status, disposition and emotional state of the sender and receiver of such signals. There are regions in the monkey and human brain which contain neurons that respond selectively to faces, bodies and eye gaze. The ability to follow another individual's gaze direction is affected in individuals with autism and other psychopathological disorders, and after particular localized brain lesions. The hypothesis that gaze following is \"hard-wired\" in the brain, and may be localized within a circuit linking the superior temporal sulcus, amygdala and orbitofrontal cortex is discussed.",
"title": ""
}
] |
[
{
"docid": "45eb2d7b74f485e9eeef584555e38316",
"text": "With the increasing demand of massive multimodal data storage and organization, cross-modal retrieval based on hashing technique has drawn much attention nowadays. It takes the binary codes of one modality as the query to retrieve the relevant hashing codes of another modality. However, the existing binary constraint makes it difficult to find the optimal cross-modal hashing function. Most approaches choose to relax the constraint and perform thresholding strategy on the real-value representation instead of directly solving the original objective. In this paper, we first provide a concrete analysis about the effectiveness of multimodal networks in preserving the inter- and intra-modal consistency. Based on the analysis, we provide a so-called Deep Binary Reconstruction (DBRC) network that can directly learn the binary hashing codes in an unsupervised fashion. The superiority comes from a proposed simple but efficient activation function, named as Adaptive Tanh (ATanh). The ATanh function can adaptively learn the binary codes and be trained via back-propagation. Extensive experiments on three benchmark datasets demonstrate that DBRC outperforms several state-of-the-art methods in both image2text and text2image retrieval task.",
"title": ""
},
{
"docid": "23cc8b190e9de5177cccf2f918c1ad45",
"text": "NFC is a standardised technology providing short-range RFID communication channels for mobile devices. Peer-to-peer applications for mobile devices are receiving increased interest and in some cases these services are relying on NFC communication. It has been suggested that NFC systems are particularly vulnerable to relay attacks, and that the attacker’s proxy devices could even be implemented using off-the-shelf NFC-enabled devices. This paper describes how a relay attack can be implemented against systems using legitimate peer-to-peer NFC communication by developing and installing suitable MIDlets on the attacker’s own NFC-enabled mobile phones. The attack does not need to access secure program memory nor use any code signing, and can use publicly available APIs. We go on to discuss how relay attack countermeasures using device location could be used in the mobile environment. These countermeasures could also be applied to prevent relay attacks on contactless applications using ‘passive’ NFC on mobile phones.",
"title": ""
},
{
"docid": "e94afab2ce61d7426510a5bcc88f7ca8",
"text": "Community detection is an important task in network analysis, in which we aim to learn a network partition that groups together vertices with similar community-level connectivity patterns. By finding such groups of vertices with similar structural roles, we extract a compact representation of the network’s large-scale structure, which can facilitate its scientific interpretation and the prediction of unknown or future interactions. Popular approaches, including the stochastic block model, assume edges are unweighted, which limits their utility by discarding potentially useful information. We introduce the weighted stochastic block model (WSBM), which generalizes the stochastic block model to networks with edge weights drawn from any exponential family distribution. This model learns from both the presence and weight of edges, allowing it to discover structure that would otherwise be hidden when weights are discarded or thresholded. We describe a Bayesian variational algorithm for efficiently approximating this model’s posterior distribution over latent block structures. We then evaluate the WSBM’s performance on both edge-existence and edge-weight prediction tasks for a set of real-world weighted networks. In all cases, the WSBM performs as well or better than the best alternatives on these tasks. community detection, weighted relational data, block models, exponential family, variational Bayes.",
"title": ""
},
{
"docid": "de99a984795645bc2e9fb4b3e3173807",
"text": "Neural networks are a family of powerful machine learning models. is book focuses on the application of neural network models to natural language data. e first half of the book (Parts I and II) covers the basics of supervised machine learning and feed-forward neural networks, the basics of working with machine learning over language data, and the use of vector-based rather than symbolic representations for words. It also covers the computation-graph abstraction, which allows to easily define and train arbitrary neural networks, and is the basis behind the design of contemporary neural network software libraries. e second part of the book (Parts III and IV) introduces more specialized neural network architectures, including 1D convolutional neural networks, recurrent neural networks, conditioned-generation models, and attention-based models. ese architectures and techniques are the driving force behind state-of-the-art algorithms for machine translation, syntactic parsing, and many other applications. Finally, we also discuss tree-shaped networks, structured prediction, and the prospects of multi-task learning.",
"title": ""
},
{
"docid": "2be58a0a458115fb9ef00627cc0580e0",
"text": "OBJECTIVE\nTo determine the physical and psychosocial impact of macromastia on adolescents considering reduction mammaplasty in comparison with healthy adolescents.\n\n\nMETHODS\nThe following surveys were administered to adolescents with macromastia and control subjects, aged 12 to 21 years: Short-Form 36v2, Rosenberg Self-Esteem Scale, Breast-Related Symptoms Questionnaire, and Eating-Attitudes Test-26 (EAT-26). Demographic variables and self-reported breast symptoms were compared between the 2 groups. Linear regression models, unadjusted and adjusted for BMI category (normal weight, overweight, obese), were fit to determine the effect of case status on survey score. Odds ratios for the risk of disordered eating behaviors (EAT-26 score ≥ 20) in cases versus controls were also determined.\n\n\nRESULTS\nNinety-six subjects with macromastia and 103 control subjects participated in this study. Age was similar between groups, but subjects with macromastia had a higher BMI (P = .02). Adolescents with macromastia had lower Short-Form 36v2 domain, Rosenberg Self-Esteem Scale, and Breast-Related Symptoms Questionnaire scores and higher EAT-26 scores compared with controls. Macromastia was also associated with a higher risk of disordered eating behaviors. In almost all cases, the impact of macromastia was independent of BMI category.\n\n\nCONCLUSIONS\nMacromastia has a substantial negative impact on health-related quality of life, self-esteem, physical symptoms, and eating behaviors in adolescents with this condition. These observations were largely independent of BMI category. Health care providers should be aware of these important negative health outcomes that are associated with macromastia and consider early evaluation for adolescents with this condition.",
"title": ""
},
{
"docid": "d43dc521d3f0f17ccd4840d6081dcbfe",
"text": "In Vehicular Ad hoc NETworks (VANETs), authentication is a crucial security service for both inter-vehicle and vehicle-roadside communications. On the other hand, vehicles have to be protected from the misuse of their private data and the attacks on their privacy, as well as to be capable of being investigated for accidents or liabilities from non-repudiation. In this paper, we investigate the authentication issues with privacy preservation and non-repudiation in VANETs. We propose a novel framework with preservation and repudiation (ACPN) for VANETs. In ACPN, we introduce the public-key cryptography (PKC) to the pseudonym generation, which ensures legitimate third parties to achieve the non-repudiation of vehicles by obtaining vehicles' real IDs. The self-generated PKCbased pseudonyms are also used as identifiers instead of vehicle IDs for the privacy-preserving authentication, while the update of the pseudonyms depends on vehicular demands. The existing ID-based signature (IBS) scheme and the ID-based online/offline signature (IBOOS) scheme are used, for the authentication between the road side units (RSUs) and vehicles, and the authentication among vehicles, respectively. Authentication, privacy preservation, non-repudiation and other objectives of ACPN have been analyzed for VANETs. Typical performance evaluation has been conducted using efficient IBS and IBOOS schemes. We show that the proposed ACPN is feasible and adequate to be used efficiently in the VANET environment.",
"title": ""
},
{
"docid": "447c5b2db5b1d7555cba2430c6d73a35",
"text": "Recent years have seen a proliferation of complex Advanced Driver Assistance Systems (ADAS), in particular, for use in autonomous cars. These systems consist of sensors and cameras as well as image processing and decision support software components. They are meant to help drivers by providing proper warnings or by preventing dangerous situations. In this paper, we focus on the problem of design time testing of ADAS in a simulated environment. We provide a testing approach for ADAS by combining multi-objective search with surrogate models developed based on neural networks. We use multi-objective search to guide testing towards the most critical behaviors of ADAS. Surrogate modeling enables our testing approach to explore a larger part of the input search space within limited computational resources. We characterize the condition under which the multi-objective search algorithm behaves the same with and without surrogate modeling, thus showing the accuracy of our approach. We evaluate our approach by applying it to an industrial ADAS system. Our experiment shows that our approach automatically identifies test cases indicating critical ADAS behaviors. Further, we show that combining our search algorithm with surrogate modeling improves the quality of the generated test cases, especially under tight and realistic computational resources.",
"title": ""
},
{
"docid": "47bf54c0d51596f39929e8f3e572a051",
"text": "Parameterizations of triangulated surfaces are used in an increasing number of mesh processing applications for various purposes. Although demands vary, they are often required to preserve the surface metric and thus minimize angle, area and length deformation. However, most of the existing techniques primarily target at angle preservation while disregarding global area deformation. In this paper an energy functional is proposed, that quantifies angle and global area deformations simultaneously, while the relative importance between angle and area preservation can be controlled by the user through a parameter. We show how this parameter can be chosen to obtain parameterizations, that are optimized for an uniform sampling of the surface of a model. Maps obtained by minimizing this energy are well suited for applications that desire an uniform surface sampling, like re-meshing or mapping regularly patterned textures. Besides being invariant under rotation and translation of the domain, the energy is designed to prevent face flips during minimization and does not require a fixed boundary in the parameter domain. Although the energy is nonlinear, we show how it can be minimized efficiently using non-linear conjugate gradient methods in a hierarchical optimization framework and prove the convergence of the algorithm. The ability to control the tradeoff between the degree of angle and global area preservation is demonstrated for several models of varying complexity.",
"title": ""
},
{
"docid": "e1bee61b205d29db6b2ebbaf95e9c20b",
"text": "Despite the fact that there are thousands of programming languages existing there is a huge controversy about what language is better to solve a particular problem. In this paper we discuss requirements for programming language with respect to AGI research. In this article new language will be presented. Unconventional features (e.g. probabilistic programming and partial evaluation) are discussed as important parts of language design and implementation. Besides, we consider possible applications to particular problems related to AGI. Language interpreter for Lisp-like probabilistic mixed paradigm programming language is implemented in Haskell.",
"title": ""
},
{
"docid": "3a1019c31ff34f8a45c65703c1528fc4",
"text": "The increasing trend of studying the innate softness of robotic structures and amalgamating it with the benefits of the extensive developments in the field of embodied intelligence has led to sprouting of a relatively new yet extremely rewarding sphere of technology. The fusion of current deep reinforcement algorithms with physical advantages of a soft bio-inspired structure certainly directs us to a fruitful prospect of designing completely self-sufficient agents that are capable of learning from observations collected from their environment to achieve a task they have been assigned. For soft robotics structure possessing countless degrees of freedom, it is often not easy (something not even possible) to formulate mathematical constraints necessary for training a deep reinforcement learning (DRL) agent for the task in hand, hence, we resolve to imitation learning techniques due to ease of manually performing such tasks like manipulation that could be comfortably mimicked by our agent. Deploying current imitation learning algorithms on soft robotic systems have been observed to provide satisfactory results but there are still challenges in doing so. This review article thus posits an overview of various such algorithms along with instances of them being applied to real world scenarios and yielding state-of-the-art results followed by brief descriptions on various pristine branches of DRL research that may be centers of future research in this field of interest.",
"title": ""
},
{
"docid": "4d73c50244d16dab6d3773dbeebbae98",
"text": "We describe the latest version of Microsoft's conversational speech recognition system for the Switchboard and CallHome domains. The system adds a CNN-BLSTM acoustic model to the set of model architectures we combined previously, and includes character-based and dialog session aware LSTM language models in rescoring. For system combination we adopt a two-stage approach, whereby acoustic model posteriors are first combined at the senone/frame level, followed by a word-level voting via confusion networks. We also added another language model rescoring step following the confusion network combination. The resulting system yields a 5.1% word error rate on the NIST 2000 Switchboard test set, and 9.8% on the CallHome subset.",
"title": ""
},
{
"docid": "79ad27cffbbcbe3a49124abd82c6e477",
"text": "In this paper we address the following problem in web document and information retrieval (IR): How can we use long-term context information to gain better IR performance? Unlike common IR methods that use bag of words representation for queries and documents, we treat them as a sequence of words and use long short term memory (LSTM) to capture contextual dependencies. To the best of our knowledge, this is the first time that LSTM is applied to information retrieval tasks. Unlike training traditional LSTMs, the training strategy is different due to the special nature of information retrieval problem. Experimental evaluation on an IR task derived from the Bing web search demonstrates the ability of the proposed method in addressing both lexical mismatch and long-term context modelling issues, thereby, significantly outperforming existing state of the art methods for web document retrieval task.",
"title": ""
},
{
"docid": "b0766f310c4926b475bb646911a27f34",
"text": "Currently, two frameworks of causal reasoning compete: Whereas dependency theories focus on dependencies between causes and effects, dispositional theories model causation as an interaction between agents and patients endowed with intrinsic dispositions. One important finding providing a bridge between these two frameworks is that failures of causes to generate their effects tend to be differentially attributed to agents and patients regardless of their location on either the cause or the effect side. To model different types of error attribution, we augmented a causal Bayes net model with separate error sources for causes and effects. In several experiments, we tested this new model using the size of Markov violations as the empirical indicator of differential assumptions about the sources of error. As predicted by the model, the size of Markov violations was influenced by the location of the agents and was moderated by the causal structure and the type of causal variables.",
"title": ""
},
{
"docid": "569700bd1114b1b93a13af25b2051631",
"text": "Empathy and sympathy play crucial roles in much of human social interaction and are necessary components for healthy coexistence. Sympathy is thought to be a proxy for motivating prosocial behavior and providing the affective and motivational base for moral development. The purpose of the present study was to use functional MRI to characterize developmental changes in brain activation in the neural circuits underpinning empathy and sympathy. Fifty-seven individuals, whose age ranged from 7 to 40 years old, were presented with short animated visual stimuli depicting painful and non-painful situations. These situations involved either a person whose pain was accidentally caused or a person whose pain was intentionally inflicted by another individual to elicit empathic (feeling as the other) or sympathetic (feeling concern for the other) emotions, respectively. Results demonstrate monotonic age-related changes in the amygdala, supplementary motor area, and posterior insula when participants were exposed to painful situations that were accidentally caused. When participants observed painful situations intentionally inflicted by another individual, age-related changes were detected in the dorsolateral prefrontal and ventromedial prefrontal cortex, with a gradual shift in that latter region from its medial to its lateral portion. This pattern of activation reflects a change from a visceral emotional response critical for the analysis of the affective significance of stimuli to a more evaluative function. Further, these data provide evidence for partially distinct neural mechanisms subserving empathy and sympathy, and demonstrate the usefulness of a developmental neurobiological approach to the new emerging area of moral neuroscience.",
"title": ""
},
{
"docid": "023302562ddfe48ac81943fedcf881b7",
"text": "Knitty is an interactive design system for creating knitted animals. The user designs a 3D surface model using a sketching interface. The system automatically generates a knitting pattern and then visualizes the shape of the resulting 3D animal model by applying a simple physics simulation. The user can see the resulting shape before beginning the actual knitting. The system also provides a production assistant interface for novices. The user can easily understand how to knit each stitch and what to do in each step. In a workshop for novices, we observed that even children can design their own knitted animals using our system.",
"title": ""
},
{
"docid": "691032ab4d9bcc1f536b1b8a5d8e73ae",
"text": "Many decisions must be made under stress, and many decision situations elicit stress responses themselves. Thus, stress and decision making are intricately connected, not only on the behavioral level, but also on the neural level, i.e., the brain regions that underlie intact decision making are regions that are sensitive to stress-induced changes. The purpose of this review is to summarize the findings from studies that investigated the impact of stress on decision making. The review includes those studies that examined decision making under stress in humans and were published between 1985 and October 2011. The reviewed studies were found using PubMed and PsycInfo searches. The review focuses on studies that have examined the influence of acutely induced laboratory stress on decision making and that measured both decision-making performance and stress responses. Additionally, some studies that investigated decision making under naturally occurring stress levels and decision-making abilities in patients who suffer from stress-related disorders are described. The results from the studies that were included in the review support the assumption that stress affects decision making. If stress confers an advantage or disadvantage in terms of outcome depends on the specific task or situation. The results also emphasize the role of mediating and moderating variables. The results are discussed with respect to underlying psychological and neural mechanisms, implications for everyday decision making and future research directions.",
"title": ""
},
{
"docid": "ea765da47c4280f846fe144570a755dc",
"text": "A new nonlinear noise reduction method is presented that uses the discrete wavelet transform. Similar to Donoho (1995) and Donohoe and Johnstone (1994, 1995), the authors employ thresholding in the wavelet transform domain but, following a suggestion by Coifman, they use an undecimated, shift-invariant, nonorthogonal wavelet transform instead of the usual orthogonal one. This new approach can be interpreted as a repeated application of the original Donoho and Johnstone method for different shifts. The main feature of the new algorithm is a significantly improved noise reduction compared to the original wavelet based approach. This holds for a large class of signals, both visually and in the l/sub 2/ sense, and is shown theoretically as well as by experimental results.",
"title": ""
},
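The abstract above describes soft thresholding applied to an undecimated (shift-invariant) wavelet transform. The following sketch, which is not the authors' original implementation, uses PyWavelets' stationary wavelet transform with the universal threshold of Donoho and Johnstone; the wavelet choice, decomposition level, and noise level are illustrative assumptions.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 1024                                   # length must be divisible by 2**level for swt
t = np.linspace(0.0, 1.0, n)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.standard_normal(n)

level = 4
coeffs = pywt.swt(noisy, "db4", level=level)          # undecimated (stationary) transform

# Universal threshold; noise scale estimated from the level-1 (finest) detail coefficients.
_, cD1 = pywt.swt(noisy, "db4", level=1)[0]
sigma = np.median(np.abs(cD1)) / 0.6745
threshold = sigma * np.sqrt(2.0 * np.log(n))

# Soft-threshold only the detail bands, keep the approximations.
denoised_coeffs = [(cA, pywt.threshold(cD, threshold, mode="soft")) for cA, cD in coeffs]
denoised = pywt.iswt(denoised_coeffs, "db4")

print("noisy RMSE   :", np.sqrt(np.mean((noisy - clean) ** 2)))
print("denoised RMSE:", np.sqrt(np.mean((denoised - clean) ** 2)))
```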
{
"docid": "4427f79777bfe5ea1617f06a5aa6f0cc",
"text": "Despite decades of sustained effort, memory corruption attacks continue to be one of the most serious security threats faced today. They are highly sought after by attackers, as they provide ultimate control --- the ability to execute arbitrary low-level code. Attackers have shown time and again their ability to overcome widely deployed countermeasures such as Address Space Layout Randomization (ASLR) and Data Execution Prevention (DEP) by crafting Return Oriented Programming (ROP) attacks. Although Turing-complete ROP attacks have been demonstrated in research papers, real-world ROP payloads have had a more limited objective: that of disabling DEP so that injected native code attacks can be carried out. In this paper, we provide a systematic defense, called Control Flow and Code Integrity (CFCI), that makes injected native code attacks impossible. CFCI achieves this without sacrificing compatibility with existing software, the need to replace system programs such as the dynamic loader, and without significant performance penalty. We will release CFCI as open-source software by the time of this conference.",
"title": ""
},
{
"docid": "3969a0156c558020ca1de3b978c3ab4e",
"text": "Silver-Russell syndrome (SRS) and Beckwith-Wiedemann syndrome (BWS) are 2 clinically opposite growth-affecting disorders belonging to the group of congenital imprinting disorders. The expression of both syndromes usually depends on the parental origin of the chromosome in which the imprinted genes reside. SRS is characterized by severe intrauterine and postnatal growth retardation with various additional clinical features such as hemihypertrophy, relative macrocephaly, fifth finger clinodactyly, and triangular facies. BWS is an overgrowth syndrome with many additional clinical features such as macroglossia, organomegaly, and an increased risk of childhood tumors. Both SRS and BWS are clinically and genetically heterogeneous, and for clinical diagnosis, different diagnostic scoring systems have been developed. Six diagnostic scoring systems for SRS and 4 for BWS have been previously published. However, neither syndrome has common consensus diagnostic criteria yet. Most cases of SRS and BWS are associated with opposite epigenetic or genetic abnormalities in the 11p15 chromosomal region leading to opposite imbalances in the expression of imprinted genes. SRS is also caused by maternal uniparental disomy 7, which is usually identified in 5-10% of the cases, and is therefore the first imprinting disorder that affects 2 different chromosomes. In this review, we describe in detail the clinical diagnostic criteria and scoring systems as well as molecular causes in both SRS and BWS.",
"title": ""
},
{
"docid": "65aa27cc08fd1f3532f376b536c452ba",
"text": "Design work and design knowledge in Information Systems (IS) is important for both research and practice. Yet there has been comparatively little critical attention paid to the problem of specifying design theory so that it can be communicated, justified, and developed cumulatively. In this essay we focus on the structural components or anatomy of design theories in IS as a special class of theory. In doing so, we aim to extend the work of Walls, Widemeyer and El Sawy (1992) on the specification of information systems design theories (ISDT), drawing on other streams of thought on design research and theory to provide a basis for a more systematic and useable formulation of these theories. We identify eight separate components of design theories: (1) purpose and scope, (2) constructs, (3) principles of form and function, (4) artifact mutability, (5) testable propositions, (6) justificatory knowledge (kernel theories), (7) principles of implementation, and (8) an expository instantiation. This specification includes components missing in the Walls et al. adaptation of Dubin (1978) and Simon (1969) and also addresses explicitly problems associated with the role of instantiations and the specification of design theories for methodologies and interventions as well as for products and applications. The essay is significant as the unambiguous establishment of design knowledge as theory gives a sounder base for arguments for the rigor and legitimacy of IS as an applied discipline and for its continuing progress. A craft can proceed with the copying of one example of a design artifact by one artisan after another. A discipline cannot.",
"title": ""
}
] |
scidocsrr
|
50f81e8fabd9783c3cc3dce1dab44e5c
|
A Software Defined Fog Node Based Distributed Blockchain Cloud Architecture for IoT
|
[
{
"docid": "6ae63f854dcc8ecee76cfd5812506895",
"text": "The inherent characteristics of Internet of Things (IoT) devices, such as limited storage and computational power, require a new platform to efficiently process data. The concept of fog computing has been introduced as a technology to bridge the gap between remote data centers and IoT devices. Fog computing enables a wide range of benefits, including enhanced security, decreased bandwidth, and reduced latency. These benefits make the fog an appropriate paradigm for many IoT services in various applications such as connected vehicles and smart grids. Nevertheless, fog devices (located at the edge of the Internet) obviously face many security and privacy threats, much the same as those faced by traditional data centers. In this article, the authors discuss the security and privacy issues in IoT environments and propose a mechanism that employs fog to improve the distribution of certificate revocation information among IoT devices for security enhancement. They also present potential research directions aimed at using fog computing to enhance the security and privacy issues in IoT environments.",
"title": ""
},
{
"docid": "bda2541d2c2a5a5047b29972cb1536f6",
"text": "Fog is an emergent architecture for computing, storage, control, and networking that distributes these services closer to end users along the cloud-to-things continuum. It covers both mobile and wireline scenarios, traverses across hardware and software, resides on network edge but also over access networks and among end users, and includes both data plane and control plane. As an architecture, it supports a growing variety of applications, including those in the Internet of Things (IoT), fifth-generation (5G) wireless systems, and embedded artificial intelligence (AI). This survey paper summarizes the opportunities and challenges of fog, focusing primarily in the networking context of IoT.",
"title": ""
}
] |
[
{
"docid": "aa7029c5e29a72a8507cbcb461ef92b0",
"text": "Regenerative endodontics has been defined as \"biologically based procedure designed to replace damaged structures, including dentin and root structures, as well as cells of the pulp-dentin complex.\" This is an exciting and rapidly evolving field of human endodontics for the treatment of immature permanent teeth with infected root canal systems. These procedures have shown to be able not only to resolve pain and apical periodontitis but continued root development, thus increasing the thickness and strength of the previously thin and fracture-prone roots. In the last decade, over 80 case reports, numerous animal studies, and series of regenerative endodontic cases have been published. However, even with multiple successful case reports, there are still some remaining questions regarding terminology, patient selection, and procedural details. Regenerative endodontics provides the hope of converting a nonvital tooth into vital one once again.",
"title": ""
},
{
"docid": "945ead15b96ed06a15b12372b4787fcf",
"text": "We describe the development and testing of ab initio derived, AMBER ff03 compatible charge parameters for a large library of 147 noncanonical amino acids including β- and N-methylated amino acids for use in applications such as protein structure prediction and de novo protein design. The charge parameter derivation was performed using the RESP fitting approach. Studies were performed assessing the suitability of the derived charge parameters in discriminating the activity/inactivity between 63 analogs of the complement inhibitor Compstatin on the basis of previously published experimental IC50 data and a screening procedure involving short simulations and binding free energy calculations. We found that both the approximate binding affinity (K*) and the binding free energy calculated through MM-GBSA are capable of discriminating between active and inactive Compstatin analogs, with MM-GBSA performing significantly better. Key interactions between the most potent Compstatin analog that contains a noncanonical amino acid are presented and compared to the most potent analog containing only natural amino acids and native Compstatin. We make the derived parameters and an associated web interface that is capable of performing modifications on proteins using Forcefield_NCAA and outputting AMBER-ready topology and parameter files freely available for academic use at http://selene.princeton.edu/FFNCAA . The forcefield allows one to incorporate these customized amino acids into design applications with control over size, van der Waals, and electrostatic interactions.",
"title": ""
},
{
"docid": "6fb0459adccd26015ee39897da52d349",
"text": "Each year, thousands of software vulnerabilities are discovered and reported to the public. Unpatched known vulnerabilities are a significant security risk. It is imperative that software vendors quickly provide patches once vulnerabilities are known and users quickly install those patches as soon as they are available. However, most vulnerabilities are never actually exploited. Since writing, testing, and installing software patches can involve considerable resources, it would be desirable to prioritize the remediation of vulnerabilities that are likely to be exploited. Several published research studies have reported moderate success in applying machine learning techniques to the task of predicting whether a vulnerability will be exploited. These approaches typically use features derived from vulnerability databases (such as the summary text describing the vulnerability) or social media posts that mention the vulnerability by name. However, these prior studies share multiple methodological shortcomings that inflate predictive power of these approaches. We replicate key portions of the prior work, compare their approaches, and show how selection of training and test data critically affect the estimated performance of predictive models. The results of this study point to important methodological considerations that should be taken into account so that results reflect real-world utility.",
"title": ""
},
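Because the passage above attributes inflated results largely to how training and test data are selected, the sketch below illustrates the methodological point with a chronological (rather than random) split before fitting a simple text classifier. The data frame columns and the use of TF-IDF with logistic regression are assumptions for illustration, not the replicated studies' pipeline.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

# Hypothetical vulnerability records: disclosure date, summary text, exploited label.
df = pd.DataFrame({
    "disclosed": pd.to_datetime(["2015-01-10", "2015-03-02", "2015-06-21",
                                 "2016-02-14", "2016-08-30", "2017-01-05"]),
    "summary": ["buffer overflow in image parser", "sql injection in login form",
                "use after free in browser engine", "directory traversal in web server",
                "heap overflow in media codec", "cross-site scripting in admin panel"],
    "exploited": [1, 0, 1, 0, 1, 0],
}).sort_values("disclosed")

# Chronological split: train only on vulnerabilities disclosed before the cutoff,
# so no information from the "future" leaks into the model.
cutoff = pd.Timestamp("2016-01-01")
train, test = df[df.disclosed < cutoff], df[df.disclosed >= cutoff]

vec = TfidfVectorizer()
clf = LogisticRegression()
clf.fit(vec.fit_transform(train.summary), train.exploited)

pred = clf.predict(vec.transform(test.summary))
print("precision:", precision_score(test.exploited, pred, zero_division=0))
print("recall   :", recall_score(test.exploited, pred, zero_division=0))
```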
{
"docid": "1af3ac7c85fbb902f419ec4776a1c571",
"text": "Traditional approaches to understanding the brain's resilience to neuropathology have identified neurophysiological variables, often described as brain or cognitive \"reserve,\" associated with better outcomes. However, mechanisms of function and resilience in large-scale brain networks remain poorly understood. Dynamic network theory may provide a basis for substantive advances in understanding functional resilience in the human brain. In this perspective, we describe recent theoretical approaches from network control theory as a framework for investigating network level mechanisms underlying cognitive function and the dynamics of neuroplasticity in the human brain. We describe the theoretical opportunities offered by the application of network control theory at the level of the human connectome to understand cognitive resilience and inform translational intervention.",
"title": ""
},
{
"docid": "b4103e5ddc58672334b66cc504dab5a6",
"text": "An open source project typically maintains an open bug repository so that bug reports from all over the world can be gathered. When a new bug report is submitted to the repository, a person, called a triager, examines whether it is a duplicate of an existing bug report. If it is, the triager marks it as DUPLICATE and the bug report is removed from consideration for further work. In the literature, there are approaches exploiting only natural language information to detect duplicate bug reports. In this paper we present a new approach that further involves execution information. In our approach, when a new bug report arrives, its natural language information and execution information are compared with those of the existing bug reports. Then, a small number of existing bug reports are suggested to the triager as the most similar bug reports to the new bug report. Finally, the triager examines the suggested bug reports to determine whether the new bug report duplicates an existing bug report. We calibrated our approach on a subset of the Eclipse bug repository and evaluated our approach on a subset of the Firefox bug repository. The experimental results show that our approach can detect 67%-93% of duplicate bug reports in the Firefox bug repository, compared to 43%-72% using natural language information alone.",
"title": ""
},
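As a rough illustration of combining natural-language similarity with execution-information similarity when ranking candidate duplicates, the sketch below uses TF-IDF cosine similarity over report text and Jaccard similarity over sets of executed methods, merged with a weighted sum. The weighting and the toy reports are assumptions; the paper's actual similarity measures and calibration are not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical existing bug reports: free text plus the set of methods seen in the trace.
existing = [
    {"id": 101, "text": "crash when opening large project file",
     "trace": {"FileLoader.open", "Parser.parse", "UI.refresh"}},
    {"id": 102, "text": "toolbar icons rendered incorrectly on high dpi screens",
     "trace": {"UI.refresh", "IconCache.load"}},
]
new_report = {"text": "application crashes while loading a big project",
              "trace": {"FileLoader.open", "Parser.parse"}}

def jaccard(a, b):
    """Set overlap of executed methods; 0 when both traces are empty."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

vec = TfidfVectorizer()
tfidf = vec.fit_transform([new_report["text"]] + [r["text"] for r in existing])
text_sims = cosine_similarity(tfidf[0], tfidf[1:]).ravel()

alpha = 0.5  # assumed weight between textual and execution similarity
scores = [(r["id"], alpha * ts + (1 - alpha) * jaccard(new_report["trace"], r["trace"]))
          for r, ts in zip(existing, text_sims)]

for rid, score in sorted(scores, key=lambda x: x[1], reverse=True):
    print(f"candidate {rid}: combined similarity {score:.3f}")
```

A triager would then inspect only the top-ranked candidates rather than the full repository.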
{
"docid": "7e5d83af3c6496e41c19b36b2392f076",
"text": "JavaScript is an interpreted programming language most often used for enhancing webpage interactivity and functionality. It has powerful capabilities to interact with webpage documents and browser windows, however, it has also opened the door for many browser-based security attacks. Insecure engineering practices of using JavaScript may not directly lead to security breaches, but they can create new attack vectors and greatly increase the risks of browser-based attacks. In this article, we present the first measurement study on insecure practices of using JavaScript on the Web. Our focus is on the insecure practices of JavaScript inclusion and dynamic generation, and we examine their severity and nature on 6,805 unique websites. Our measurement results reveal that insecure JavaScript practices are common at various websites: (1) at least 66.4% of the measured websites manifest the insecure practices of including JavaScript files from external domains into the top-level documents of their webpages; (2) over 44.4% of the measured websites use the dangerous eval() function to dynamically generate and execute JavaScript code on their webpages; and (3) in JavaScript dynamic generation, using the document.write() method and the innerHTML property is much more popular than using the relatively secure technique of creating script elements via DOM methods. Our analysis indicates that safe alternatives to these insecure practices exist in common cases and ought to be adopted by website developers and administrators for reducing potential security risks.",
"title": ""
},
{
"docid": "4f3177b303b559f341b7917683114257",
"text": "We investigate the integration of a planning mechanism into sequence-to-sequence models using attention. We develop a model which can plan ahead in the future when it computes its alignments between input and output sequences, constructing a matrix of proposed future alignments and a commitment vector that governs whether to follow or recompute the plan. This mechanism is inspired by the recently proposed strategic attentive reader and writer (STRAW) model for Reinforcement Learning. Our proposed model is end-to-end trainable using primarily differentiable operations. We show that it outperforms a strong baseline on character-level translation tasks from WMT’15, the algorithmic task of finding Eulerian circuits of graphs, and question generation from the text. Our analysis demonstrates that the model computes qualitatively intuitive alignments, converges faster than the baselines, and achieves superior performance with fewer parameters.",
"title": ""
},
{
"docid": "59f32005b0debc0241e68a855c486634",
"text": "To extract patterns from neuroimaging data, various techniques, including statistical methods and machine learning algorithms, have been explored to ultimately aid in Alzheimer's disease diagnosis of older adults in both clinical and research applications. However, identifying the distinctions between Alzheimer's brain data and healthy brain data in older adults (age > 75) is challenging due to highly similar brain patterns and image intensities. Recently, cutting-edge deep learning technologies have been rapidly expanding into numerous fields, including medical image analysis. This work outlines state-of-the-art deep learning-based pipelines employed to distinguish Alzheimer's magnetic resonance imaging (MRI) and functional MRI data from normal healthy control data for the same age group. Using these pipelines, which were executed on a GPU-based high performance computing platform, the data were strictly and carefully preprocessed. Next, scale and shift invariant lowto high-level features were obtained from a high volume of training images using convolutional neural network (CNN) architecture. In this study, functional MRI data were used for the first time in deep learning applications for the purposes of medical image analysis and Alzheimer's disease prediction. These proposed and implemented pipelines, which demonstrate a significant improvement in classification output when compared to other studies, resulted in high and reproducible accuracy rates of 99.9% and 98.84% for the fMRI and MRI pipelines, respectively.",
"title": ""
},
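To make the CNN feature-extraction step concrete, here is a minimal two-class image classifier skeleton in PyTorch. The layer sizes, the 64x64 single-channel input, and the random tensors standing in for preprocessed MRI/fMRI slices are assumptions; the published pipelines, preprocessing, and reported accuracies are not reproduced by this sketch.

```python
import torch
import torch.nn as nn

# Random tensors stand in for batches of preprocessed single-channel brain slices.
images = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8,))          # 0 = control, 1 = Alzheimer's (hypothetical labels)

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
    nn.Linear(64, 2),
)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(5):                        # a few dummy training steps
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.4f}")
```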
{
"docid": "727a97b993098aa1386e5bfb11a99d4b",
"text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance.",
"title": ""
},
{
"docid": "f02d087b0d51fda5873e9582bc8d652c",
"text": "Morfessor is a family of probabilistic machine learning methods for finding the morphological segmentation from raw text data. Recent developments include the development of semi-supervised methods for utilizing annotated data. Morfessor 2.0 is a rewrite of the original, widely-used Morfessor 1.0 software, with well documented command-line tools and library interface. It includes new features such as semi-supervised learning, online training, and integrated evaluation code.",
"title": ""
},
{
"docid": "0fca0826e166ddbd4c26fe16086ff7ec",
"text": "Enteric redmouth disease (ERM) is a serious septicemic bacterial disease of salmonid fish species. It is caused by Yersinia ruckeri, a Gram-negative rod-shaped enterobacterium. It has a wide host range, broad geographical distribution, and causes significant economic losses in the fish aquaculture industry. The disease gets its name from the subcutaneous hemorrhages, it can cause at the corners of the mouth and in gums and tongue. Other clinical signs include exophthalmia, darkening of the skin, splenomegaly and inflammation of the lower intestine with accumulation of thick yellow fluid. The bacterium enters the fish via the secondary gill lamellae and from there it spreads to the blood and internal organs. Y. ruckeri can be detected by conventional biochemical, serological and molecular methods. Its genome is 3.7 Mb with 3406-3530 coding sequences. Several important virulence factors of Y. ruckeri have been discovered, including haemolyin YhlA and metalloprotease Yrp1. Both non-specific and specific immune responses of fish during the course of Y. ruckeri infection have been well characterized. Several methods of vaccination have been developed for controlling both biotype 1 and biotype 2 Y. ruckeri strains in fish. This review summarizes the current state of knowledge regarding enteric redmouth disease and Y. ruckeri: diagnosis, genome, virulence factors, interaction with the host immune responses, and the development of vaccines against this pathogen.",
"title": ""
},
{
"docid": "d2f4159b73f6baf188d49c43e6215262",
"text": "In this paper, we compare the performance of descriptors computed for local interest regions, as, for example, extracted by the Harris-Affine detector [Mikolajczyk, K and Schmid, C, 2004]. Many different descriptors have been proposed in the literature. It is unclear which descriptors are more appropriate and how their performance depends on the interest region detector. The descriptors should be distinctive and at the same time robust to changes in viewing conditions as well as to errors of the detector. Our evaluation uses as criterion recall with respect to precision and is carried out for different image transformations. We compare shape context [Belongie, S, et al., April 2002], steerable filters [Freeman, W and Adelson, E, Setp. 1991], PCA-SIFT [Ke, Y and Sukthankar, R, 2004], differential invariants [Koenderink, J and van Doorn, A, 1987], spin images [Lazebnik, S, et al., 2003], SIFT [Lowe, D. G., 1999], complex filters [Schaffalitzky, F and Zisserman, A, 2002], moment invariants [Van Gool, L, et al., 1996], and cross-correlation for different types of interest regions. We also propose an extension of the SIFT descriptor and show that it outperforms the original method. Furthermore, we observe that the ranking of the descriptors is mostly independent of the interest region detector and that the SIFT-based descriptors perform best. Moments and steerable filters show the best performance among the low dimensional descriptors.",
"title": ""
},
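The evaluation criterion mentioned above (recall plotted against precision) can be computed once descriptor matches are labeled correct or false against ground-truth correspondences. The sketch below does this for a hypothetical array of match distances and labels while sweeping a distance threshold; it is a generic illustration rather than the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical descriptor-space distances for candidate matches and whether each
# candidate is a true correspondence (in practice derived from ground-truth homographies).
distances = np.concatenate([rng.normal(0.3, 0.10, 80),    # mostly correct matches
                            rng.normal(0.7, 0.15, 120)])  # mostly false matches
is_correct = np.concatenate([np.ones(80, bool), np.zeros(120, bool)])
total_correspondences = is_correct.sum()

print(" threshold  recall  precision")
for threshold in np.linspace(0.2, 0.9, 8):
    accepted = distances <= threshold
    true_pos = np.sum(accepted & is_correct)
    recall = true_pos / total_correspondences
    precision = true_pos / max(accepted.sum(), 1)
    print(f"   {threshold:.2f}     {recall:.2f}    {precision:.2f}")
```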
{
"docid": "65ac52564041b0c2e173560d49ec762f",
"text": "Constructionism can be a powerful framework for teaching complex content to novices. At the core of constructionism is the suggestion that by enabling learners to build creative artifacts that require complex content to function, those learners will have opportunities to learn this content in contextualized, personally-meaningful ways. In this paper, we investigate the relevance of a set of approaches broadly called “educational data mining” or “learning analytics” (henceforth, EDM) to help provide a basis for quantitative research on constructionist learning which does not abandon the richness seen as essential by many researchers in that paradigm. We suggest that EDM may have the potential to support research that is meaningful and useful both to researchers working actively in the constructionist tradition but also to wider communities. Finally, we explore potential collaborations between researchers in the EDM and constructionist traditions; such collaborations have the potential to enhance the ability of constructionist researchers to make rich inference about learning and learners, while providing EDM researchers with many interesting new research questions and challenges. In recent years, project-based, student-centered approaches to education have gained prominence, due in part to an increased demand for higher-level skills in the job market (Levi and Murname, 2004), positive research findings on the effectiveness of such approaches (Barron, Pearson, et al., 2008), and a broader acceptance in public policy circles, as shown, for example, by the Next Generation Science Standards (NGSS Lead States, 2013). While several approaches for this type of learning exist, Constructionism is one of the most popular and well-developed ones (Papert, 1980). In this paper, we investigate the relevance of a set of approaches called “educational data mining” or “learning analytics” (henceforth abbreviated as ‘EDM’) (R. Baker & Yacef, 2009; Romero & Ventura, 2010a; R. Baker & Siemens, in press) to help provide a basis for quantitative research on constructionist learning which does not abandon the richness seen as essential by many researchers in that paradigm. As such, EDM may have the potential to support research that is meaningful and useful both to researchers working actively in the constructionist tradition and to the wider community of learning scientists and policymakers. EDM, broadly, is a set of methods that apply data mining and machine learning techniques such as prediction, classification, and discovery of latent structural regularities to rich, voluminous, and idiosyncratic educational data, potentially similar to those data generated by many constructionist learning environments which allows students to explore and build their own artifacts, computer programs, and media pieces. As such, we identify four axes in which EDM methods may be helpful for constructionist research: 1. EDM methods do not require constructionists to abandon deep qualitative analysis for simplistic summative or confirmatory quantitative analysis; 2. EDM methods can generate different and complementary new analyses to support qualitative research; 3. By enabling precise formative assessments of complex constructs, EDM methods can support an increase in methodological rigor and replicability; 4. EDM can be used to present comprehensible and actionable data to learners and teachers in situ. 
In order to investigate those axes, we start by describing our perspective on compatibilities and incompatibilities between constructionism and EDM. At the core of constructionism is the suggestion that by enabling learners to build creative artifacts that require complex content to function, those learners will have opportunities to learn that complex content in connected, meaningful ways. Constructionist projects often emphasize making those artifacts (and often data) public, socially relevant, and personally meaningful to learners, and encourage working in social spaces such that learners engage each other to accelerate the learning process. diSessa and Cobb (2004) argue that constructionism serves a framework for action, as it describes its own praxis (i.e., how it matches theory to practice). The learning theory supporting constructionism is classically constructivist, combining concepts from Piaget and Vygotsky (Fosnot, 2005). As constructionism matures as a constructivist framework for action and expands in scale, constructionist projects are becoming both more complex (Reynolds & Caperton, 2011), more scalable (Resnick, Maloney, et al., 2009), and more affordable for schools following significant development in low cost “construction” technologies such as robotics and 3D printers. As such, there have been increasing opportunities to learn more about how students learn in constructionist contexts, advancing the science of learning. These discoveries will have the potential to improve the quality of all constructivist learning experiences. For example, Wilensky and Reisman (2006) have shown how constructionist modeling and simulation can make science learning more accessible, Resnick (1998) has shown how constructionism can reframe programming as art at scale, Buechley & Eisenberg (2008) have used e-textiles to engage female students in robotics, Eisenberg (2011) and Blikstein (2013, 2014) use constructionist digital fabrication to successfully teach programming, engineering, and electronics in a novel, integrated way. The findings of these research and design projects have the potential to be useful to a wide external community of teachers, researchers, practitioners, and other stakeholders. However, connecting findings from the constructionist tradition to the goals of policymakers can be challenging, due to the historical differences in methodology and values between these communities. The resources needed to study such interventions at scale are considerable, given the need to carefully document, code, and analyze each student’s work processes and artifacts. The designs of constructionist research often result in findings that do not map to what researchers, outside interests, and policymakers are expecting, in contrast to conventional controlled studies, which are designed to (more conclusively) answer a limited set of sharply targeted research questions. Due the lack of a common ground to discuss benefits and scalability of constructionist and project-based designs, these designs have been too frequently sidelined to niche institutions such as private schools, museums, or atypical public schools. To understand what the role EDM methods can play in constructionist research, we must frame what we mean by constructionist research more precisely. We follow Papert and Harel (1991) in their situating of constructionism, but they do not constrain the term to one formal definition. 
The definition is further complicated by the fact that constructionism has many overlaps with other research and design traditions, such as constructivism and socio-constructivism themselves, as well as project-based pedagogies and inquiry-based designs. However, we believe that it is possible to define the subset of constructionism amenable to EDM, a focus we adopt in this article for brevity. In this paper, we focus on the constructionist literature dealing with students learning to construct understandings by constructing (physical or virtual) artifacts, where the students' learning environments are designed and constrained such that building artifacts in/with that environment is designed to help students construct their own understandings. In other words, we are focusing on creative work done in computational environments designed to foster creative and transformational learning, such as NetLogo (Wilensky, 1999), Scratch (Resnick, Maloney, et al., 2009), or LEGO Mindstorms. This sub-category of constructionism can and does generate considerable formative and summative data. It also has the benefit of having a history of success in the classroom. From Papert’s seminal (1972) work through today, constructionist learning has been shown to promote the development of deep understanding of relatively complex content, with many examples ranging from mathematics (Harel, 1990; Wilensky, 1996) to history (Zahn, Krauskopf, Hesse, & Pea, 2010). However, constructionist learning environments, ideas, and findings have yet to reach the majority of classrooms and have had incomplete influence in the broader education research community. There are several potential reasons for this. One of them may be a lack of demonstration that findings are generalizable across populations and across specific content. Another reason is that constructionist activities are seen to be timeconsuming for teachers (Warschauer & Matuchniak, 2010), though, in practice, it has been shown that supporting understanding through project-based work could actually save time (Fosnot, 2005) and enable classroom dynamics that may streamline class preparation (e.g., peer teaching or peer feedback). A last reason is that constructionists almost universally value more deep understanding of scientific principles than facts or procedural skills even in contexts (e.g., many classrooms) in which memorization of facts and procedural skills is the target to be evaluated (Abelson & diSessa, 1986; Papert & Harel, 1991). Therefore, much of what is learned in constructionist environments does not directly translate to test scores or other established metrics. Constructionist research can be useful and convincing to audiences that do not yet take full advantage of the scientific findings of this community, but it requires careful consideration of framing and evidence to reach them. Educational data mining methods pose the potential to both enhance constructionist research, and to support constructionist researchers in communicating their findings in a fashion that other researchers consider valid. Blikstein (2011, p. 110) made ",
"title": ""
},
{
"docid": "09da7573fbad0b501eb1e834c413a4aa",
"text": "We present XGSN, an open-source system that relies on semantic representations of sensor metadata and observations, to guide the process of annotating and publishing sensor data on the Web. XGSN is able to handle the data acquisition process of a wide number of devices and protocols, and is designed as a highly extensible platform, leveraging on the existing capabilities of the Global Sensor Networks (GSN) middleware. Going beyond traditional sensor management systems, XGSN is capable of enriching virtual sensor descriptions with semantically annotated content using standard vocabularies. In the proposed approach, sensor data and observations are annotated using an ontology network based on the SSN ontology, providing a standardized queryable representation that makes it easier to share, discover, integrate and interpret the data. XGSN manages the annotation process for the incoming sensor observations, producing RDF streams that are sent to the cloud-enabled Linked Sensor Middleware, which can internally store the data or perform continuous query processing. The distributed nature of XGSN allows deploying different remote instances that can interchange observation data, so that virtual sensors can be aggregated and consume data from other remote virtual sensors. In this paper we show how this approach has been implemented in XGSN, and incorporated to the wider OpenIoT platform, providing a highly flexible and scalable system for managing the life-cycle of sensor data, from acquisition to publishing, in the context of the semantic Web of Things.",
"title": ""
},
{
"docid": "32b8087a31a588b03d5b6f4a100e6308",
"text": "This paper conceptually examines how and why projects and project teams may be conceived as highly generative episodic individual and team learning places that can serve as vehicles or agents to promote organizational learning. It draws on and dissects a broad and relevant literature concerning situated learning, organizational learning, learning spaces and project management. The arguments presented signal a movement towards a project workplace becoming more organizationally acknowledged and supported as a learning intense entity wherein, learning is a more conspicuous, deliberate and systematic social activity by project participants. This paper challenges conventional and limited organizational perceptions about project teams and their practices and discloses their extended value contributions to organizational learning development. © 2011 Elsevier Ltd. and IPMA. All rights reserved.",
"title": ""
},
{
"docid": "f5519eff0c13e0ee42245fdf2627b8ae",
"text": "An efficient vehicle tracking system is designed and implemented for tracking the movement of any equipped vehicle from any location at any time. The proposed system made good use of a popular technology that combines a Smartphone application with a microcontroller. This will be easy to make and inexpensive compared to others. The designed in-vehicle device works using Global Positioning System (GPS) and Global system for mobile communication / General Packet Radio Service (GSM/GPRS) technology that is one of the most common ways for vehicle tracking. The device is embedded inside a vehicle whose position is to be determined and tracked in real-time. A microcontroller is used to control the GPS and GSM/GPRS modules. The vehicle tracking system uses the GPS module to get geographic coordinates at regular time intervals. The GSM/GPRS module is used to transmit and update the vehicle location to a database. A Smartphone application is also developed for continuously monitoring the vehicle location. The Google Maps API is used to display the vehicle on the map in the Smartphone application. Thus, users will be able to continuously monitor a moving vehicle on demand using the Smartphone application and determine the estimated distance and time for the vehicle to arrive at a given destination. In order to show the feasibility and effectiveness of the system, this paper presents experimental results of the vehicle tracking system and some experiences on practical implementations.",
"title": ""
},
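The estimated distance and arrival time mentioned above are commonly derived from GPS coordinates with the haversine formula; the snippet below shows that calculation for two assumed coordinate pairs and an assumed average speed. It illustrates the arithmetic only, not the described hardware or Smartphone application.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical current vehicle position and destination.
vehicle = (33.8938, 35.5018)
destination = (33.8886, 35.4955)

distance_km = haversine_km(*vehicle, *destination)
avg_speed_kmh = 30.0                                   # assumed average urban speed
eta_minutes = 60.0 * distance_km / avg_speed_kmh

print(f"distance: {distance_km:.2f} km, ETA: {eta_minutes:.1f} min")
```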
{
"docid": "ba9d274247f3f3da9274be52fa8a7096",
"text": "Dysregulated growth hormone (GH) hypersecretion is usually caused by a GH-secreting pituitary adenoma and leads to acromegaly - a disorder of disproportionate skeletal, tissue, and organ growth. High GH and IGF1 levels lead to comorbidities including arthritis, facial changes, prognathism, and glucose intolerance. If the condition is untreated, enhanced mortality due to cardiovascular, cerebrovascular, and pulmonary dysfunction is associated with a 30% decrease in life span. This Review discusses acromegaly pathogenesis and management options. The latter include surgery, radiation, and use of novel medications. Somatostatin receptor (SSTR) ligands inhibit GH release, control tumor growth, and attenuate peripheral GH action, while GH receptor antagonists block GH action and effectively lower IGF1 levels. Novel peptides, including SSTR ligands, exhibiting polyreceptor subtype affinities and chimeric dopaminergic-somatostatinergic properties are currently in clinical trials. Effective control of GH and IGF1 hypersecretion and ablation or stabilization of the pituitary tumor mass lead to improved comorbidities and lowering of mortality rates for this hormonal disorder.",
"title": ""
},
{
"docid": "dd84b653de8b3b464c904a988a622a39",
"text": "We demonstrate that for sentence-level relation extraction it is beneficial to consider other relations in the sentential context while predicting the target relation. Our architecture uses an LSTM-based encoder to jointly learn representations for all relations in a single sentence. We combine the context representations with an attention mechanism to make the final prediction. We use the Wikidata knowledge base to construct a dataset of multiple relations per sentence and to evaluate our approach. Compared to a baseline system, our method results in an average error reduction of 24% on a held-out set of relations. The code and the dataset to replicate the experiments are made available at https://github.com/ukplab.",
"title": ""
},
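To illustrate the attention step described above, the sketch below combines encoded representations of the other relations in a sentence into a context vector via softmax-normalized scores against the target relation's representation. The vectors are random stand-ins for LSTM-encoder outputs, and the dot-product scoring is an assumption; the published model's exact parameterization is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

target = rng.standard_normal(dim)                # representation of the target relation
context = rng.standard_normal((4, dim))          # representations of other relations in the sentence

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Score each context relation against the target, then form a weighted context vector.
scores = context @ target
weights = softmax(scores)
context_vector = weights @ context               # (dim,)

# A final prediction would combine the target and context representations, e.g. by concatenation.
combined = np.concatenate([target, context_vector])
print("attention weights:", np.round(weights, 3))
print("combined feature length:", combined.shape[0])
```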
{
"docid": "e6298cd08f89d3cb8a6f8a78c2f4ae49",
"text": "We present a fast pattern matching algorithm with a large set of templates. The algorithm is based on the typical template matching speeded up by the dual decomposition; the Fourier transform and the Karhunen-Loeve transform. The proposed algorithm is appropriate for the search of an object with unknown distortion within a short period. Patterns with different distortion differ slightly from each other and are highly correlated. The image vector subspace required for effective representation can be defined by a small number of eigenvectors derived by the Karhunen-Loeve transform. A vector subspace spanned by the eigenvectors is generated, and any image vector in the subspace is considered as a pattern to be recognized. The pattern matching of objects with unknown distortion is formulated as the process to extract the portion of the input image, find the pattern most similar to the extracted portion in the subspace, compute normalized correlation between them at each location in the input image, and find the location with the best score. Searching for objects with unknown distortion requires vast computation. The formulation above makes it possible to decompose highly correlated reference images into eigenvectors, as well as to decompose images in frequency domain, and to speed up the process significantly. Index Terms —Template matching, pattern matching, Karhunen-Loeve transform, Fourier transform, eigenvector. —————————— ✦ ——————————",
"title": ""
},
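The eigenvector-subspace idea in the abstract above can be sketched as follows: distorted versions of a template are stacked, a low-dimensional eigenbasis is obtained (here via SVD), and a candidate image patch is scored by the normalized correlation between the patch and its reconstruction in that subspace. The synthetic templates and the number of retained eigenvectors are assumptions, and the Fourier-domain speed-up from the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
h = w = 16

# Synthetic "distorted" versions of a template: a bright square shifted by a pixel or two.
templates = []
for dx, dy in [(0, 0), (1, 0), (0, 1), (1, 1), (-1, 0), (0, -1)]:
    img = np.zeros((h, w))
    img[5 + dy:11 + dy, 5 + dx:11 + dx] = 1.0
    templates.append(img.ravel())
X = np.array(templates)
mean = X.mean(axis=0)

# Eigenvectors of the template set via SVD of the centred data matrix.
_, _, vt = np.linalg.svd(X - mean, full_matrices=False)
k = 3
basis = vt[:k]                                   # top-k eigenvectors (rows)

def subspace_score(patch):
    """Normalized correlation between a patch and its projection onto the eigenspace."""
    p = patch.ravel() - mean
    recon = basis.T @ (basis @ p)
    denom = np.linalg.norm(p) * np.linalg.norm(recon)
    return float(p @ recon / denom) if denom > 0 else 0.0

target = np.zeros((h, w)); target[6:12, 5:11] = 1.0      # a distorted instance
clutter = rng.random((h, w))                             # unrelated patch
print("target score :", round(subspace_score(target), 3))
print("clutter score:", round(subspace_score(clutter), 3))
```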
{
"docid": "e31ea6b8c4a5df049782b463abc602ea",
"text": "Nature plays a very important role to solve problems in a very effective and well-organized way. Few researchers are trying to create computational methods that can assist human to solve difficult problems. Nature inspired techniques like swarm intelligence, bio-inspired, physics/chemistry and many more have helped in solving difficult problems and also provide most favourable solution. Nature inspired techniques are wellmatched for soft computing application because parallel, dynamic and self organising behaviour. These algorithms motivated from the working group of social agents like ants, bees and insect. This paper is a complete survey of nature inspired techniques.",
"title": ""
}
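As a concrete example of the swarm-intelligence family mentioned in the survey above, here is a compact particle swarm optimization loop minimizing a simple test function. The inertia and acceleration coefficients are common textbook values, chosen as assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Simple test objective: minimum value 0 at the origin."""
    return np.sum(x ** 2, axis=-1)

n_particles, dim, iters = 20, 5, 100
w, c1, c2 = 0.7, 1.5, 1.5                    # inertia and acceleration coefficients

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
best_pos = pos.copy()
best_val = sphere(pos)
g_idx = best_val.argmin()
g_pos, g_val = best_pos[g_idx].copy(), best_val[g_idx]

for _ in range(iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (best_pos - pos) + c2 * r2 * (g_pos - pos)
    pos = pos + vel
    val = sphere(pos)
    improved = val < best_val
    best_pos[improved], best_val[improved] = pos[improved], val[improved]
    if best_val.min() < g_val:
        g_idx = best_val.argmin()
        g_pos, g_val = best_pos[g_idx].copy(), best_val[g_idx]

print("best objective value found:", round(float(g_val), 6))
```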
] |
scidocsrr
|
64206b98b6c86e3bf83dcd85bd3522ce
|
SenticNet 4: A Semantic Resource for Sentiment Analysis Based on Conceptual Primitives
|
[
{
"docid": "7f74c519207e469c39f81d52f39438a0",
"text": "Automatic sentiment classification has been extensively studied and applied in recent years. However, sentiment is expressed differently in different domains, and annotating corpora for every possible domain of interest is impractical. We investigate domain adaptation for sentiment classifiers, focusing on online reviews for different types of products. First, we extend to sentiment classification the recently-proposed structural correspondence learning (SCL) algorithm, reducing the relative error due to adaptation between domains by an average of 30% over the original SCL algorithm and 46% over a supervised baseline. Second, we identify a measure of domain similarity that correlates well with the potential for adaptation of a classifier from one domain to another. This measure could for instance be used to select a small set of domains to annotate whose trained classifiers would transfer well to many other domains.",
"title": ""
},
{
"docid": "742c0b15f6a466bfb4e5130b49f79e64",
"text": "There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks (DBNs); however, scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the convolutional deep belief network, a hierarchical generative model that scales to realistic image sizes. This model is translation-invariant and supports efficient bottom-up and top-down probabilistic inference. Key to our approach is probabilistic max-pooling, a novel technique that shrinks the representations of higher layers in a probabilistically sound way. Our experiments show that the algorithm learns useful high-level visual features, such as object parts, from unlabeled images of objects and natural scenes. We demonstrate excellent performance on several visual recognition tasks and show that our model can perform hierarchical (bottom-up and top-down) inference over full-sized images.",
"title": ""
}
] |
[
{
"docid": "4a2fcdf5394e220a579d1414588a124a",
"text": "In this paper we introduce AR Scratch, the first augmented-reality (AR) authoring environment designed for children. By adding augmented-reality functionality to the Scratch programming platform, this environment allows pre-teens to create programs that mix real and virtual spaces. Children can display virtual objects on a real-world space seen through a camera, and they can control the virtual world through interactions between physical objects. This paper describes the system design process, which focused on appropriately presenting the AR technology to the typical Scratch population (children aged 8-12), as influenced by knowledge of child spatial cognition, programming expertise, and interaction metaphors. Evaluation of this environment is proposed, accompanied by results from an initial pilot study, as well as discussion of foreseeable impacts on the Scratch user community.",
"title": ""
},
{
"docid": "0e48de6dc8d1f51eb2a7844d4d67b8f5",
"text": "Vygotsky asserted that the student who had mastered algebra had attained “a new higher plane of thought”, a level of abstraction and generalization which transformed the meaning of the lower (arithmetic) level. He also affirmed the importance of the mastery of scientific concepts for the development of the ability to think theoretically, and emphasized the mediating role of semiotic forms and symbol systems in developing this ability. Although historically in mathematics and traditionally in education, algebra followed arithmetic, Vygotskian theory supports the reversal of this sequence in the service of orienting children to the most abstract and general level of understanding initially. This organization of learning activity for the development of algebraic thinking is very different from the introduction of elements of algebra into the study of arithmetic in the early grades. The intended theoretical (algebraic) understanding is attained through appropriation of psychological tools, in the form of specially designed schematics, whose mastery is not merely incidental to but the explicit focus of instruction. The author’s research in implementing Davydov’s Vygotskian-based elementary mathematics curriculum in the U.S. suggests that these characteristics function synergistically to develop algebraic understanding and computational competence as well. Kurzreferat: Vygotsky ging davon aus, dass Lernende, denen es gelingt, Algebra zu beherrschen, „ein höheres gedankliches Niveau” erreicht hätten, eine Ebene von Abstraktion und Generalisierung, welche die Bedeutung der niederen (arithmetischen) Ebene verändert. Er bestätigte auch die Relevanz der Beherrschung von wissenschaftlichen Begriffen für die Entwicklung der Fähigkeit, theoretisch zu denken und betonte dabei die vermittelnde Rolle von semiotischen Formen und Symbolsystemen für die Ausformung dieser Fähigkeit. Obwohl mathematik-his tor isch und t radi t ionel l erziehungswissenschaftlich betrachtet, Algebra der Arithmetik folgte, stützt Vygotski’s Theorie die Umkehrung dieser Sequenz bei dem Bemühen, Kinder an das abstrakteste und allgemeinste Niveau des ersten Verstehens heranzuführen. Diese Organisation von Lernaktivitäten für die Ausbildung algebraischen Denkens unterscheidet sich erheblich von der Einführung von Algebra-Elementen in das Lernen von Arithmetik während der ersten Schuljahre. Das beabsichtigte theoretische (algebraische) Verstehen wird erreicht durch die Aneignung psychologischer Mittel, und zwar in Form von dafür speziell entwickelten Schemata, deren Beherrschung nicht nur beiläufig erfolgt, sondern Schwerpunkt des Unterrichts ist. Die im Beitrag beschriebenen Forschungen zur Implementierung von Davydov’s elementarmathematischen Curriculum in den Vereinigten Staaten, das auf Vygotsky basiert, legt die Vermutung nahe, dass diese Charakteristika bei der Entwicklung von algebraischem Verstehen und von Rechenkompetenzen synergetisch funktionieren. ZDM-Classification: C30, D30, H20 l. Historical Context Russian psychologist Lev Vygotsky stated clearly his perspective on algebraic thinking. Commenting on its development within the structure of the Russian curriculum in the early decades of the twentieth century,",
"title": ""
},
{
"docid": "764b13c0c5c8134edad4fac65af356d6",
"text": "This thesis introduces new methods for statistically modelling text using topic models. Topic models have seen many successes in recent years, and are used in a variety of applications, including analysis of news articles, topic-based search interfaces and navigation tools for digital libraries. Despite these recent successes, the field of topic modelling is still relatively new and there remains much to be explored. One noticeable absence from most of the previous work on topic modelling is consideration of language and document structure—from low-level structures, including word order and syntax, to higher-level structures, such as relationships between documents. The focus of this thesis is therefore structured topic models—models that combine latent topics with information about document structure, ranging from local sentence structure to inter-document relationships. These models draw on techniques from Bayesian statistics, including hierarchical Dirichlet distributions and processes, Pitman-Yor processes, and Markov chain Monte Carlo methods. Several methods for estimating the parameters of Dirichlet-multinomial distributions are also compared. The main contribution of this thesis is the introduction of three structured topic models. The first is a topic-based language model. This model captures both word order and latent topics by extending a Bayesian topic model to incorporate n-gram statistics. A bigram version of the new model does better at predicting future words than either a topic model or a trigram language model. It also provides interpretable topics. The second model arises from a Bayesian reinterpretation of a classic generative dependency parsing model. The new model demonstrates that parsing performance can be substantially improved by a careful choice of prior and by sampling hyperparameters. Additionally, the generative nature of the model facilitates the inclusion of latent state variables, which act as specialised part-of-speech tags or “syntactic topics”. The third is a model that captures high-level relationships between documents. This model uses nonparametric Bayesian priors and Markov chain Monte Carlo methods to infer topic-based document clusters. The model assigns a higher probability to unseen test documents than either a clustering model without topics or a Bayesian topic model without document clusters. The model can be extended to incorporate author information, resulting in finer-grained clusters and better predictive performance.",
"title": ""
},
{
"docid": "0358eea62c126243134ed1cd2ac97121",
"text": "In the absence of vision, grasping an object often relies on tactile feedback from the ngertips. As the nger pushes the object, the ngertip can feel the contact point move. If the object is known in advance, from this motion the nger may infer the location of the contact point on the object and thereby the object pose. This paper primarily investigates the problem of determining the pose (orientation and position) and motion (velocity and angular velocity) of a planar object with known geometry from such contact motion generated by pushing. A dynamic analysis of pushing yields a nonlinear system that relates through contact the object pose and motion to the nger motion. The contact motion on the ngertip thus encodes certain information about the object pose. Nonlinear observability theory is employed to show that such information is su cient for the nger to \\observe\" not only the pose but also the motion of the object. Therefore a sensing strategy can be realized as an observer of the nonlinear dynamical system. Two observers are subsequently introduced. The rst observer, based on the result of [15], has its \\gain\" determined by the solution of a Lyapunov-like equation; it can be activated at any time instant during a push. The second observer, based on Newton's method, solves for the initial (motionless) object pose from three intermediate contact points during a push. Under the Coulomb friction model, the paper copes with support friction in the plane and/or contact friction between the nger and the object. Extensive simulations have been done to demonstrate the feasibility of the two observers. Preliminary experiments (with an Adept robot) have also been conducted. A contact sensor has been implemented using strain gauges. Accepted by the International Journal of Robotics Research.",
"title": ""
},
{
"docid": "35d220680e18898d298809272619b1d6",
"text": "This paper proposes the use of a least mean fourth (LMF)-based algorithm for single-stage three-phase grid-integrated solar photovoltaic (SPV) system. It consists of an SPV array, voltage source converter (VSC), three-phase grid, and linear/nonlinear loads. This system has an SPV array coupled with a VSC to provide three-phase active power and also acts as a static compensator for the reactive power compensation. It also conforms to an IEEE-519 standard on harmonics by improving the quality of power in the three-phase distribution network. Therefore, this system serves to provide harmonics alleviation, load balancing, power factor correction and regulating the terminal voltage at the point of common coupling. In order to increase the efficiency and maximum power to be extracted from the SPV array at varying environmental conditions, a single-stage system is used along with perturb and observe method of maximum power point tracking (MPPT) integrated with the LMF-based control technique. The proposed system is modeled and simulated using MATLAB/Simulink with available simpower system toolbox and the behaviour of the system under different loads and environmental conditions are verified experimentally on a developed system in the laboratory.",
"title": ""
},
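The least mean fourth (LMF) control term mentioned above refers to an adaptive filter whose weights are updated along the gradient of the fourth power of the error, i.e. w is adjusted by mu * e^3 * x at each step. The sketch below demonstrates that update on a generic system-identification toy problem with an assumed step size; it does not model the photovoltaic converter or the MPPT loop described in the passage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown FIR system to identify (stand-in for the reference-signal estimation task).
true_w = np.array([0.6, -0.3, 0.1])
n_taps, n_samples, mu = len(true_w), 5000, 1e-2   # mu: assumed LMF step size

x = rng.standard_normal(n_samples)
d = np.convolve(x, true_w, mode="full")[:n_samples] + 0.01 * rng.standard_normal(n_samples)

w = np.zeros(n_taps)
for n in range(n_taps, n_samples):
    x_vec = x[n - n_taps + 1:n + 1][::-1]         # most recent samples first
    e = d[n] - w @ x_vec                          # estimation error
    w = w + mu * (e ** 3) * x_vec                 # LMF update: stochastic gradient of E[e^4]

print("true weights     :", true_w)
print("estimated weights:", np.round(w, 3))
```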
{
"docid": "06fdd2dae0aa83ec3697342d831da39f",
"text": "Traditionally, nostalgia has been conceptualized as a medical disease and a psychiatric disorder. Instead, we argue that nostalgia is a predominantly positive, self-relevant, and social emotion serving key psychological functions. Nostalgic narratives reflect more positive than negative affect, feature the self as the protagonist, and are embedded in a social context. Nostalgia is triggered by dysphoric states such as negative mood and loneliness. Finally, nostalgia generates positive affect, increases selfesteem, fosters social connectedness, and alleviates existential threat. KEYWORDS—nostalgia; positive affect; self-esteem; social connectedness; existential meaning The term nostalgia was inadvertedly inspired by history’s most famous itinerant. Emerging victoriously from the Trojan War, Odysseus set sail for his native island of Ithaca to reunite with his faithful wife, Penelope. For 3 years, our wandering hero fought monsters, assorted evildoers, and mischievous gods. For another 7 years, he took respite in the arms of the beautiful sea nymph Calypso. Possessively, she offered to make him immortal if he stayed with her on the island of Ogygia. ‘‘Full well I acknowledge,’’ Odysseus replied to his mistress, ‘‘prudent Penelope cannot compare with your stature or beauty, for she is only a mortal, and you are immortal and ageless. Nevertheless, it is she whom I daily desire and pine for. Therefore I long for my home and to see the day of returning’’ (Homer, 1921, Book V, pp. 78–79). This romantic declaration, along with other expressions of Odyssean longing in the eponymous Homeric epic, gave rise to the term nostalgia. It is a compound word, consisting of nostos (return) and algos (pain). Nostalgia, then, is literally the suffering due to relentless yearning for the homeland. The term nostalgia was coined in the 17th century by the Swiss physician Johaness Hofer (1688/1934), but references to the emotion it denotes can be found in Hippocrates, Caesar, and the Bible. HISTORICAL AND MODERN CONCEPTIONS OF NOSTALGIA From the outset, nostalgia was equated with homesickness. It was also considered a bad omen. In the 17th and 18th centuries, speculation about nostalgia was based on observations of Swiss mercenaries in the service of European monarchs. Nostalgia was regarded as a medical disease confined to the Swiss, a view that persisted through most of the 19th century. Symptoms— including bouts of weeping, irregular heartbeat, and anorexia— were attributed variously to demons inhabiting the middle brain, sharp differentiation in atmospheric pressure wreaking havoc in the brain, or the unremitting clanging of cowbells in the Swiss Alps, which damaged the eardrum and brain cells. By the beginning of the 20th century, nostalgia was regarded as a psychiatric disorder. Symptoms included anxiety, sadness, and insomnia. By the mid-20th century, psychodynamic approaches considered nostalgia a subconscious desire to return to an earlier life stage, and it was labeled as a repressive compulsive disorder. Soon thereafter, nostalgia was downgraded to a variant of depression, marked by loss and grief, though still equated with homesickness (for a historical review of nostalgia, see Sedikides, Wildschut, & Baden, 2004). By the late 20th century, there were compelling reasons for nostalgia and homesickness to finally part ways. Adult participants regard nostalgia as different from homesickness. 
For example, they associate the words warm, old times, childhood, and yearning more frequently with nostalgia than with homesickness (Davis, 1979). Furthermore, whereas homesickness research focused on the psychological problems (e.g., separation anxiety) that can arise when young people transition beyond the home environment, nostalgia transcends social groups and age. For example, nostalgia is found cross-culturally and among wellfunctioning adults, children, and dementia patients (Sedikides et al., 2004; Sedikides, Wildschut, Routledge, & Arndt, 2008; Zhou, Sedikides, Wildschut, & Gao, in press). Finally, although homesickness refers to one’s place of origin, nostalgia can refer Address correspondence to Constantine Sedikides, Center for Research on Self and Identity, School of Psychology, University of Southampton, Southampton SO17 1BJ, England, U.K.; e-mail: [email protected]. CURRENT DIRECTIONS IN PSYCHOLOGICAL SCIENCE 304 Volume 17—Number 5 Copyright r 2008 Association for Psychological Science to a variety of objects (e.g., persons, events, places; Wildschut, Sedikides, Arndt, & Routledge, 2006). It is in this light that we note the contemporary definition of nostalgia as a sentimental longing for one’s past. It is, moreover, a sentimentality that is pervasively experienced. Over 80% of British undergraduates reported experiencing nostalgia at least once a week (Wildschut et al., 2006). Given this apparent ubiquity, the time has come for an empirical foray into the content, causes, and functions of this emotion. THE EMPIRICAL BASIS FOR UNDERSTANDING NOSTALGIA The Canvas of Nostalgia What is the content of the nostalgic experience? Wildschut et al. (2006) analyzed the content of narratives submitted voluntarily by (American and Canadian) readers to the periodical Nostalgia. Also, Wildschut et al. asked British undergraduates to write a narrative account of a nostalgic experience. These narratives were also analyzed for content. Across both studies, the most frequently listed objects of nostalgic reverie were close others (family members, friends, partners), momentous events (birthdays, vacations), and settings (sunsets, lakes). Nostalgia has been conceptualized variously as a negative, ambivalent, or positive emotion (Sedikides et al., 2004). These conceptualizations were put to test. In a study by Wildschut, Stephan, Sedikides, Routledge, and Arndt (2008), British and American undergraduates wrote narratives about a ‘‘nostalgic event’’ (vs. an ‘‘ordinary event’’) in their lives and reflected briefly upon the event and how it made them feel. Content analysis revealed that the simultaneous expression of happiness and sadness was more common in narratives of nostalgic events than in narratives of ordinary events. Also in Wildschut et al., British undergraduates wrote about a nostalgic (vs. ordinary vs. simply positive) event in their lives and then rated their happiness and sadness. Although the recollection of ordinary and positive events rarely gave rise to both happiness and sadness, such coactivation occurred much more frequently following the recollection of a nostalgic event. Yet, nostalgic events featured more frequent expressions of happiness than of sadness and induced higher levels of happiness than of sadness. Wildschut et al. (2006) obtained additional evidence that nostalgia is mostly a positively toned emotion: The narratives included far more expressions of positive than negative affect. At the same time, though, there was evidence of bittersweetness. 
Many narratives contained descriptions of disappointments and losses, and some touched on such issues as separation and even the death of loved ones. Nevertheless, positive and negative elements were often juxtaposed to create redemption, a narrative pattern that progresses from a negative or undesirable state (e.g., suffering, pain, exclusion) to a positive or desirable state (e.g., acceptance, euphoria, triumph; McAdams, 2001). For example, although a family reunion started badly (e.g., an uncle insulting the protagonist), it nevertheless ended well (e.g., the family singing together after dinner). The strength of the redemption theme may explain why, despite the descriptions of sorrow, the overall affective signature of the nostalgic narratives was positive. Moreover, Wildschut et al. (2006) showed that nostalgia is a self-relevant and social emotion: The self almost invariably figured as the protagonist in the narratives and was almost always surrounded by close others. In all, the canvas of nostalgia is rich, reflecting themes of selfhood, sociality, loss, redemption, and ambivalent, yet mostly positive, affectivity. The Triggers of Nostalgia Wildschut et al. (2006) asked participants to describe when they become nostalgic. The most frequently reported trigger was negative affect (‘‘I think of nostalgic experiences when I am sad as they often make me feel better’’), and, within this category, loneliness was the most frequently reported discrete affective state (‘‘If I ever feel lonely or sad I tend to think of my friends or family who I haven’t seen in a long time’’). Given these initial reports, Wildschut et al. proceeded to test whether indeed negative mood and loneliness qualify as nostalgia triggers. British undergraduates read one of three news stories, each based on actual events, that were intended to influence their mood. In the negative-mood condition, they read about the Tsunami that struck coastal regions in Asia and Africa in December 2004. In the neutral-mood condition, they read about the January 2005 landing of the Huygens probe on Titan. In the positive-mood condition, they read about the November 2004 birth of a polar bear, ostensibly in the London Zoo (actually in the Detroit Zoo). Then they completed a measure of nostalgia, rating the extent to which they missed 18 aspects of their past (e.g., ‘‘holidays I went on,’’ ‘‘past TV shows, movies,’’ ‘‘someone I loved’’). Participants in the negativemood condition were more nostalgic (i.e., missed more aspects of their past) than were participants in the other two conditions. In another study, loneliness was successfully induced by giving participants false (high vs. low) feedback on a ‘‘loneliness’’ test (i.e., they were led to believe they were either lonely or not lonely based on the feedback). Subsequently, participants rated how much they missed 18 aspects of their past. Participants in the high-loneliness condition were more nostalgic than those in the low-loneliness condition. These findings were re",
"title": ""
},
{
"docid": "371ab18488da4e719eda8838d0d42ba8",
"text": "Research reveals dramatic differences in the ways that people from different cultures perceive the world around them. Individuals from Western cultures tend to focus on that which is object-based, categorically related, or self-relevant whereas people from Eastern cultures tend to focus more on contextual details, similarities, and group-relevant information. These different ways of perceiving the world suggest that culture operates as a lens that directs attention and filters the processing of the environment into memory. The present review describes the behavioral and neural studies exploring the contribution of culture to long-term memory and related processes. By reviewing the extant data on the role of various neural regions in memory and considering unifying frameworks such as a memory specificity approach, we identify some promising directions for future research.",
"title": ""
},
{
"docid": "8e3f8fca93ca3106b83cf85d20c061ca",
"text": "KeeLoq is a 528-round lightweight block cipher which has a 64-bit secret key and a 32-bit block length. The cube attack, proposed by Dinur and Shamir, is a new type of attacking method. In this paper, we investigate the security of KeeLoq against iterative side-channel cube attack which is an enhanced attack scheme. Based on structure of typical block ciphers, we give the model of iterative side-channel cube attack. Using the traditional single-bit leakage model, we assume that the attacker can exactly possess the information of one bit leakage after round 23. The new attack model costs a data complexity of 211.00 chosen plaintexts to recover the 23-bit key of KeeLoq. Our attack will reduce the key searching space to 241 by considering an error-free bit from internal states.",
"title": ""
},
{
"docid": "55507c03c5319de2806c0365accf2980",
"text": "Although latent factor models (e.g., matrix factorization) achieve good accuracy in rating prediction, they suffer from several problems including cold-start, non-transparency, and suboptimal recommendation for local users or items. In this paper, we employ textual review information with ratings to tackle these limitations. Firstly, we apply a proposed aspect-aware topic model (ATM) on the review text to model user preferences and item features from different aspects, and estimate the aspect importance of a user towards an item. The aspect importance is then integrated into a novel aspect-aware latent factor model (ALFM), which learns user’s and item’s latent factors based on ratings. In particular, ALFM introduces a weighted matrix to associate those latent factors with the same set of aspects discovered by ATM, such that the latent factors could be used to estimate aspect ratings. Finally, the overall rating is computed via a linear combination of the aspect ratings, which are weighted by the corresponding aspect importance. To this end, our model could alleviate the data sparsity problem and gain good interpretability for recommendation. Besides, an aspect rating is weighted by an aspect importance, which is dependent on the targeted user’s preferences and targeted item’s features. Therefore, it is expected that the proposed method can model a user’s preferences on an item more accurately for each user-item pair locally. Comprehensive experimental studies have been conducted on 19 datasets from Amazon and Yelp 2017 Challenge dataset. Results show that our method achieves significant improvement compared with strong baseline methods, especially for users with only few ratings. Moreover, our model could interpret the recommendation results in depth.",
"title": ""
},
{
"docid": "a7af0135b2214ca88883fe136bb13e70",
"text": "ITIL is one of the most used frameworks for IT service management. Implementing ITIL processes through an organization is not an easy task and present many difficulties. This paper explores the ITIL implementation's challenges and tries to experiment how Business Process Management Systems can help organization overtake those challenges.",
"title": ""
},
{
"docid": "69c253f895d2f886496332d1b3d22542",
"text": "In this paper, we present a novel refined fused model combining masked Res-C3D network and skeleton LSTM for abnormal gesture recognition in RGB-D videos. The key to our design is to learn discriminative representations of gesture sequences in particular abnormal gesture samples by fusing multiple features from different models. First, deep spatiotemporal features are well extracted by 3D convolutional neural networks with residual architecture (Res-C3D). As gestures are mainly derived from the arm or hand movements, a masked Res-C3D network is built to decrease the effect of background and other variations via exploiting the skeleton of the body to reserve arm regions with discarding other regions. And then, relative positions and angles of different key points are extracted and used to build a time-series model by long short-term memory network (LSTM). Based the above representations, a fusion scheme for blending classification results and remedy model disadvantage by abnormal gesture via a weight fusion layer is developed, in which the weights of each voting sub-classifier being advantage to a certain class in our ensemble model are adaptively obtained by training in place of fixed weights. Our experimental results show that the proposed method can distinguish the abnormal gesture samples effectively and achieve the state-of-the-art performance in the IsoGD dataset.",
"title": ""
},
{
"docid": "d29485bc844995b639bb497fb05fcb6a",
"text": "Vol. LII (June 2015), 375–393 375 © 2015, American Marketing Association ISSN: 0022-2437 (print), 1547-7193 (electronic) *Paul R. Hoban is Assistant Professor of Marketing, Wisconsin School of Business, University of Wisconsin–Madison (e-mail: phoban@ bus. wisc. edu). Randolph E. Bucklin is Professor of Marketing, Peter W. Mullin Chair in Management, UCLA Anderson School of Management, University of California, Los Angeles (e-mail: randy.bucklin@anderson. ucla. edu). Avi Goldfarb served as associate editor for this article. PAUL R. HOBAN and RANDOLPH E. BUCKLIN*",
"title": ""
},
{
"docid": "dc418c7add2456b08bc3a6f15b31da9f",
"text": "In professional search environments, such as patent search or legal search, search tasks have unique characteristics: 1) users interactively issue several queries for a topic, and 2) users are willing to examine many retrieval results, i.e., there is typically an emphasis on recall. Recent surveys have also verified that professional searchers continue to have a strong preference for Boolean queries because they provide a record of what documents were searched. To support this type of professional search, we propose a novel Boolean query suggestion technique. Specifically, we generate Boolean queries by exploiting decision trees learned from pseudo-labeled documents and rank the suggested queries using query quality predictors. We evaluate our algorithm in simulated patent and medical search environments. Compared with a recent effective query generation system, we demonstrate that our technique is effective and general.",
"title": ""
},
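To make the decision-tree-to-Boolean-query idea in the passage above concrete, the toy sketch below trains a small scikit-learn tree on binary term-presence features of pseudo-labeled documents and turns every root-to-leaf path that predicts the relevant class into a conjunction, OR-ing the conjunctions together. The documents, terms and labels are invented stand-ins, not the paper's patent or medical collections, and no query-quality ranking is applied.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

docs = [  # pseudo-labeled documents (toy stand-ins)
    ("rotor blade pitch control wind turbine", 1),
    ("wind turbine generator blade fatigue", 1),
    ("solar panel inverter topology", 0),
    ("battery charger control circuit", 0),
]
texts, labels = zip(*docs)
vec = CountVectorizer(binary=True)
X = vec.fit_transform(texts).toarray()
terms = vec.get_feature_names_out()

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)
t = tree.tree_

def paths_to_queries(node=0, conj=None):
    """Yield one Boolean conjunction per leaf that predicts the relevant class."""
    conj = conj or []
    if t.children_left[node] == -1:                      # leaf node
        if t.value[node][0][1] > t.value[node][0][0]:    # majority class == relevant
            yield " AND ".join(conj) if conj else "TRUE"
        return
    term = terms[t.feature[node]]                        # binary feature: term present or not
    yield from paths_to_queries(t.children_left[node],  conj + [f"NOT {term}"])
    yield from paths_to_queries(t.children_right[node], conj + [term])

print(" OR ".join(f"({q})" for q in paths_to_queries()))
```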
{
"docid": "1ca9d06a2afdd63976976a14648bf5be",
"text": "Real-time solutions for noise reduction and signal processing represent a central challenge for the development of Brain Computer Interfaces (BCI). In this paper, we introduce the Moving Average Convergence Divergence (MACD) filter, a tunable digital passband filter for online noise reduction and onset detection without preliminary learning phase, used in economic markets analysis. MACD performance was tested and benchmarked with other filters using data collected with functional Near Infrared Spectoscopy (fNIRS) during a digit sequence memorization task. This filter has a good performance on filtering and real-time peak activity onset detection, compared to other techniques. Therefore, MACD could be implemented for efficient BCI design using fNIRS.",
"title": ""
},
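For readers unfamiliar with the filter described above, MACD is simply the difference of a fast and a slow exponential moving average, with a third EMA of that difference serving as a trigger line; an onset can be flagged where the MACD line crosses above its trigger. The window lengths and the synthetic hemodynamic-like signal below are assumptions, not the parameters or data used in the study.

```python
import numpy as np

def ema(x, span):
    """Exponential moving average with smoothing factor 2 / (span + 1)."""
    alpha = 2.0 / (span + 1.0)
    out = np.empty_like(x, dtype=float)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def macd_onsets(x, fast=12, slow=26, trigger=9):
    macd = ema(x, fast) - ema(x, slow)       # band-pass-like difference of two EMAs
    sig = ema(macd, trigger)                 # trigger (signal) line
    cross_up = (macd[1:] > sig[1:]) & (macd[:-1] <= sig[:-1])
    return macd, sig, np.flatnonzero(cross_up) + 1

# Synthetic slow hemodynamic-like bump in noise (assumed stand-in for an fNIRS channel).
rng = np.random.default_rng(0)
t = np.arange(600)
x = 0.4 * np.exp(-0.5 * ((t - 300) / 40.0) ** 2) + 0.05 * rng.standard_normal(t.size)

macd, sig, onsets = macd_onsets(x)
print("candidate onsets at samples:", onsets[:5])
```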
{
"docid": "dcab5c32a037ac31f8a541458a2d72a6",
"text": "To determine the 3D orientation and 3D location of objects in the surroundings of a camera mounted on a robot or mobile device, we developed two powerful algorithms in object detection and temporal tracking that are combined seamlessly for robotic perception and interaction as well as Augmented Reality (AR). A separate evaluation of, respectively, the object detection and the temporal tracker demonstrates the important stride in research as well as the impact on industrial robotic applications and AR. When evaluated on a standard dataset, the detector produced the highest f1score with a large margin while the tracker generated the best accuracy at a very low latency of approximately 2 ms per frame with one CPU core – both algorithms outperforming the state of the art. When combined, we achieve a powerful framework that is robust to handle multiple instances of the same object under occlusion and clutter while attaining real-time performance. Aiming at stepping beyond the simple scenarios used by current systems, often constrained by having a single object in absence of clutter, averting to touch the object to prevent close-range partial occlusion, selecting brightly colored objects to easily segment them individually or assuming that the object has simple geometric structure, we demonstrate the capacity to handle challenging cases under clutter, partial occlusion and varying lighting conditions with objects of different shapes and sizes.",
"title": ""
},
{
"docid": "aeb4af864a4e2435486a69f5694659dc",
"text": "A great amount of research has been developed around the early cognitive impairments that best predict the onset of Alzheimer's disease (AD). Given that mild cognitive impairment (MCI) is no longer considered to be an intermediate state between normal aging and AD, new paths have been traced to acquire further knowledge about this condition and its subtypes, and to determine which of them have a higher risk of conversion to AD. It is now known that other deficits besides episodic and semantic memory impairments may be present in the early stages of AD, such as visuospatial and executive function deficits. Furthermore, recent investigations have proven that the hippocampus and the medial temporal lobe structures are not only involved in memory functioning, but also in visual processes. These early changes in memory, visual, and executive processes may also be detected with the study of eye movement patterns in pathological conditions like MCI and AD. In the present review, we attempt to explore the existing literature concerning these patterns of oculomotor changes and how these changes are related to the early signs of AD. In particular, we argue that deficits in visual short-term memory, specifically in iconic memory, attention processes, and inhibitory control, may be found through the analysis of eye movement patterns, and we discuss how they might help to predict the progression from MCI to AD. We add that the study of eye movement patterns in these conditions, in combination with neuroimaging techniques and appropriate neuropsychological tasks based on rigorous concepts derived from cognitive psychology, may highlight the early presence of cognitive impairments in the course of the disease.",
"title": ""
},
{
"docid": "92a00453bc0c2115a8b37e5acc81f193",
"text": "Choosing the appropriate software development methodology is something which continues to occupy the minds of many IT professionals. The introduction of “Agile” development methodologies such as XP and SCRUM held the promise of improved software quality and reduced delivery times. Combined with a Lean philosophy, there would seem to be potential for much benefit. While evidence does exist to support many of the Lean/Agile claims, we look here at how such methodologies are being adopted in the rigorous environment of safety-critical embedded software development due to its high regulation. Drawing on the results of a systematic literature review we find that evidence is sparse for Lean/Agile adoption in these domains. However, where it has been trialled, “out-of-the-box” Agile practices do not seem to fully suit these environments but rather tailored Agile versions combined with more planbased practices seem to be making inroads.",
"title": ""
},
{
"docid": "3f7c16788bceba51f0cbf0e9c9592556",
"text": "Centralised patient monitoring systems are in huge demand as they not only reduce the labour work and cost but also the time of the clinical hospitals. Earlier wired communication was used but now Zigbee which is a wireless mesh network is preferred as it reduces the cost. Zigbee is also preferred over Bluetooth and infrared wireless communication because it is energy efficient, has low cost and long distance range (several miles). In this paper we proposed wireless transmission of data between a patient and centralised unit using Zigbee module. The paper is divided into two sections. First is patient monitoring system for multiple patients and second is the centralised patient monitoring system. These two systems are communicating using wireless transmission technology i.e. Zigbee. In the first section we have patient monitoring of multiple patients. Each patient's multiple physiological parameters like ECG, temperature, heartbeat are measured at their respective unit. If any physiological parameter value exceeds the threshold value, emergency alarm and LED blinks at each patient unit. This allows a doctor to read various physiological parameters of a patient in real time. The values are displayed on the LCD at each patient unit. Similarly multiple patients multiple physiological parameters are being measured using particular sensors and multiple patient's patient monitoring system is made. In the second section centralised patient monitoring system is made in which all multiple patients multiple parameters are displayed on a central monitor using MATLAB. ECG graph is also displayed on the central monitor using MATLAB software. The central LCD also displays parameters like heartbeat and temperature. The module is less expensive, consumes low power and has good range.",
"title": ""
},
{
"docid": "05696249c57c4b0a52ddfd5598a34f00",
"text": "The quality of word representations is frequently assessed using correlation with human judgements of word similarity. Here, we question whether such intrinsic evaluation can predict the merits of the representations for downstream tasks. We study the correlation between results on ten word similarity benchmarks and tagger performance on three standard sequence labeling tasks using a variety of word vectors induced from an unannotated corpus of 3.8 billion words, and demonstrate that most intrinsic evaluations are poor predictors of downstream performance. We argue that this issue can be traced in part to a failure to distinguish specific similarity from relatedness in intrinsic evaluation datasets. We make our evaluation tools openly available to facilitate further study.",
"title": ""
},
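The intrinsic evaluation questioned in the passage above is itself very short to implement: score each benchmark word pair by the cosine similarity of its vectors and report the rank correlation with human judgements. The toy embeddings and the four-pair benchmark below are assumptions; a real study would load a standard set such as WordSim-353 together with corpus-induced vectors.

```python
import numpy as np
from scipy.stats import spearmanr

# Toy embeddings and a toy similarity benchmark (word1, word2, human score) -- assumptions.
emb = {
    "cat":   np.array([0.9, 0.1, 0.0]),
    "dog":   np.array([0.8, 0.2, 0.1]),
    "car":   np.array([0.1, 0.9, 0.2]),
    "truck": np.array([0.2, 0.8, 0.3]),
}
benchmark = [("cat", "dog", 8.5), ("car", "truck", 8.0), ("cat", "car", 1.5), ("dog", "truck", 2.0)]

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

model_scores = [cos(emb[w1], emb[w2]) for w1, w2, _ in benchmark]
human_scores = [s for _, _, s in benchmark]
rho, _ = spearmanr(model_scores, human_scores)
print(f"Spearman correlation with human judgements: {rho:.2f}")
```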
{
"docid": "e9ff17015d40f5c6dd5091557f336f43",
"text": "Web sites that accept and display content such as wiki articles or comments typically filter the content to prevent injected script code from running in browsers that view the site. The diversity of browser rendering algorithms and the desire to allow rich content make filtering quite difficult, however, and attacks such as the Samy and Yamanner worms have exploited filtering weaknesses. This paper proposes a simple alternative mechanism for preventing script injection called Browser-Enforced Embedded Policies (BEEP). The idea is that a web site can embed a policy in its pages that specifies which scripts are allowed to run. The browser, which knows exactly when it will run a script, can enforce this policy perfectly. We have added BEEP support to several browsers, and built tools to simplify adding policies to web applications. We found that supporting BEEP in browsers requires only small and localized modifications, modifying web applications requires minimal effort, and enforcing policies is generally lightweight.",
"title": ""
}
] |
scidocsrr
|
bc674a5d6ee37a7ba716400b4af9d722
|
Automatic Argumentative-Zoning Using Word2vec
|
[
{
"docid": "8921cffb633b0ea350b88a57ef0d4437",
"text": "This paper addresses the problem of identifying likely topics of texts by their position in the text. It describes the automated training and evaluation of an Optimal Position Policy, a method of locating the likely positions of topic-bearing sentences based on genre-speci c regularities of discourse structure. This method can be used in applications such as information retrieval, routing, and text summarization.",
"title": ""
},
{
"docid": "062c970a14ac0715ccf96cee464a4fec",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
},
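The architecture sketched in the abstract above — a shared embedding table feeding a nonlinear hidden layer and a softmax over the vocabulary — fits in a few lines of modern framework code. The toy PyTorch version below uses an invented corpus and dimensions, and it omits details of the original model such as the direct input-to-output connections and the mixture with interpolated n-grams.

```python
import torch
import torch.nn as nn

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
n, d, h = 3, 16, 32                        # context length, embedding dim, hidden dim (assumed)

# Each training example: (n-1 context word ids) -> next word id
data = [([idx[w] for w in corpus[i:i + n - 1]], idx[corpus[i + n - 1]])
        for i in range(len(corpus) - n + 1)]
X = torch.tensor([c for c, _ in data])
y = torch.tensor([t for _, t in data])

class NPLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(len(vocab), d)         # shared distributed word representation
        self.hidden = nn.Linear((n - 1) * d, h)
        self.out = nn.Linear(h, len(vocab))
    def forward(self, ctx):
        e = self.emb(ctx).view(ctx.size(0), -1)        # concatenate the context embeddings
        return self.out(torch.tanh(self.hidden(e)))    # logits over the vocabulary

model = NPLM()
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print(f"final training perplexity: {loss.exp().item():.2f}")
```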
{
"docid": "cd89079c74f5bb0218be67bf680b410f",
"text": "This paper illustrates a sentiment analysis approach to extract sentiments associated with polarities of positive or negative for specific subjects from a document, instead of classifying the whole document into positive or negative.The essential issues in sentiment analysis are to identify how sentiments are expressed in texts and whether the expressions indicate positive (favorable) or negative (unfavorable) opinions toward the subject. In order to improve the accuracy of the sentiment analysis, it is important to properly identify the semantic relationships between the sentiment expressions and the subject. By applying semantic analysis with a syntactic parser and sentiment lexicon, our prototype system achieved high precision (75-95%, depending on the data) in finding sentiments within Web pages and news articles.",
"title": ""
},
{
"docid": "80b173cf8dbd0bc31ba8789298bab0fa",
"text": "This paper presents a novel statistical method for factor analysis of binary and count data which is closely related to a technique known as Latent Semantic Analysis. In contrast to the latter method which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed technique uses a generative latent class model to perform a probabilistic mixture decomposition. This results in a more principled approach with a solid foundation in statistical inference. More precisely, we propose to make use of a temperature controlled version of the Expectation Maximization algorithm for model fitting, which has shown excellent performance in practice. Probabilistic Latent Semantic Analysis has many applications, most prominently in information retrieval, natural language processing, machine learning from text, and in related areas. The paper presents perplexity results for different types of text and linguistic data collections and discusses an application in automated document indexing. The experiments indicate substantial and consistent improvements of the probabilistic method over standard Latent Semantic Analysis.",
"title": ""
}
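For concreteness, the probabilistic mixture decomposition described above can be written as a short EM loop: the E-step computes P(z|d,w) from the current P(z|d) and P(w|z), optionally raised to a tempering exponent beta as in tempered EM, and the M-step re-estimates both distributions from the expected counts. The toy term-document matrix, the number of topics and the value of beta below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy document-term count matrix (documents x words) -- an assumption for illustration.
N = np.array([[4, 2, 0, 0],
              [3, 1, 0, 1],
              [0, 0, 5, 2],
              [0, 1, 4, 3]], dtype=float)
D, W, K, beta = N.shape[0], N.shape[1], 2, 0.9     # K topics; beta < 1 tempers the E-step

p_z_d = rng.dirichlet(np.ones(K), size=D)          # P(z|d), shape (D, K)
p_w_z = rng.dirichlet(np.ones(W), size=K)          # P(w|z), shape (K, W)

for _ in range(100):
    # E-step: P(z|d,w) proportional to (P(z|d) * P(w|z)) ** beta
    q = (p_z_d[:, :, None] * p_w_z[None, :, :]) ** beta      # shape (D, K, W)
    q /= q.sum(axis=1, keepdims=True)
    # M-step: re-estimate from expected counts N(d,w) * P(z|d,w)
    n_dkw = N[:, None, :] * q
    p_w_z = n_dkw.sum(axis=0)
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    p_z_d = n_dkw.sum(axis=2)
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)

loglik = (N * np.log(p_z_d @ p_w_z + 1e-12)).sum()
print(f"log-likelihood after EM: {loglik:.2f}")
print("P(w|z):\n", np.round(p_w_z, 2))
```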
] |
[
{
"docid": "d23649c81665bc76134c09b7d84382d0",
"text": "This paper demonstrates the advantages of using controlled mobility in wireless sensor networks (WSNs) for increasing their lifetime, i.e., the period of time the network is able to provide its intended functionalities. More specifically, for WSNs that comprise a large number of statically placed sensor nodes transmitting data to a collection point (the sink), we show that by controlling the sink movements we can obtain remarkable lifetime improvements. In order to determine sink movements, we first define a Mixed Integer Linear Programming (MILP) analytical model whose solution determines those sink routes that maximize network lifetime. Our contribution expands further by defining the first heuristics for controlled sink movements that are fully distributed and localized. Our Greedy Maximum Residual Energy (GMRE) heuristic moves the sink from its current location to a new site as if drawn toward the area where nodes have the highest residual energy. We also introduce a simple distributed mobility scheme (Random Movement or S. Basagni ( ) Department of Electrical and Computer Engineering, Northeastern University e-mail: [email protected] A. Carosi · C. Petrioli Dipartimento di Informatica, Università di Roma “La Sapienza” e-mail: [email protected] C. Petrioli e-mail: [email protected] E. Melachrinoudis · Z. M. Wang Department of Mechanical and Industrial Engineering, Northeastern University e-mail: [email protected] Z. M. Wang e-mail: [email protected] RM) according to which the sink moves uncontrolled and randomly throughout the network. The different mobility schemes are compared through extensive ns2-based simulations in networks with different nodes deployment, data routing protocols, and constraints on the sink movements. In all considered scenarios, we observe that moving the sink always increases network lifetime. In particular, our experiments show that controlling the mobility of the sink leads to remarkable improvements, which are as high as sixfold compared to having the sink statically (and optimally) placed, and as high as twofold compared to uncontrolled mobility.",
"title": ""
},
{
"docid": "4c12c08d72960b3b75662e9459e23079",
"text": "Graph structures play a critical role in computer vision, but they are inconvenient to use in pattern recognition tasks because of their combinatorial nature and the consequent difficulty in constructing feature vectors. Spectral representations have been used for this task which are based on the eigensystem of the graph Laplacian matrix. However, graphs of different sizes produce eigensystems of different sizes where not all eigenmodes are present in both graphs. We use the Levenshtein distance to compare spectral representations under graph edit operations which add or delete vertices. The spectral representations are therefore of different sizes. We use the concept of the string-edit distance to allow for the missing eigenmodes and compare the correct modes to each other. We evaluate the method by first using generated graphs to compare the effect of vertex deletion operations. We then examine the performance of the method on graphs from a shape database.",
"title": ""
},
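One way to read the method described above: compute the Laplacian spectrum of each graph, then align the two eigenvalue sequences with a Levenshtein-style dynamic program in which substitution costs the eigenvalue difference and insertion/deletion (a missing eigenmode) costs a fixed gap penalty. The example graphs and the gap cost in the sketch below are assumptions rather than the paper's shape database or its exact cost model.

```python
import numpy as np
import networkx as nx

def laplacian_spectrum(G):
    return np.sort(nx.laplacian_spectrum(G))

def spectral_edit_distance(s, t, gap=1.0):
    """Levenshtein-style alignment of two eigenvalue sequences of possibly different length."""
    m, n = len(s), len(t)
    D = np.zeros((m + 1, n + 1))
    D[:, 0] = gap * np.arange(m + 1)          # deleting modes absent from the other graph
    D[0, :] = gap * np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = abs(s[i - 1] - t[j - 1])    # substitution cost = eigenvalue difference
            D[i, j] = min(D[i - 1, j] + gap, D[i, j - 1] + gap, D[i - 1, j - 1] + sub)
    return D[m, n]

G1 = nx.cycle_graph(6)
G2 = nx.cycle_graph(6); G2.remove_node(5)     # a vertex-deletion edit of G1
G3 = nx.complete_graph(6)
s1, s2, s3 = map(laplacian_spectrum, (G1, G2, G3))
print("d(C6, C6 minus one vertex):", round(spectral_edit_distance(s1, s2), 3))
print("d(C6, K6):                 ", round(spectral_edit_distance(s1, s3), 3))
```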
{
"docid": "81cf3581955988c71b58e7a097ea00bd",
"text": "Graph coloring has been employed since the 1980s to efficiently compute sparse Jacobian and Hessian matrices using either finite differences or automatic differentiation. Several coloring problems occur in this context, depending on whether the matrix is a Jacobian or a Hessian, and on the specifics of the computational techniques employed. We consider eight variant vertex coloring problems here. This article begins with a gentle introduction to the problem of computing a sparse Jacobian, followed by an overview of the historical development of the research area. Then we present a unifying framework for the graph models of the variant matrix estimation problems. The framework is based upon the viewpoint that a partition of a matrix into structurally orthogonal groups of columns corresponds to distance-2 coloring an appropriate graph representation. The unified framework helps integrate earlier work and leads to fresh insights; enables the design of more efficient algorithms for many problems; leads to new algorithms for others; and eases the task of building graph models for new problems. We report computational results on two of the coloring problems to support our claims. Most of the methods for these problems treat a column or a row of a matrix as an atomic entity, and partition the columns or rows (or both). A brief review of methods that do not fit these criteria is provided. We also discuss results in discrete mathematics and theoretical computer science that intersect with the topics considered here.",
"title": ""
},
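The column-partitioning idea in the passage above can be made concrete in a few lines: two Jacobian columns may share one finite-difference evaluation whenever they have no nonzero in a common row, which is exactly a proper coloring of the column intersection graph (equivalently, a distance-2 coloring of the bipartite row-column graph). The sparsity pattern, the test function and the simple greedy ordering below are assumptions chosen for brevity, not one of the specialized algorithms surveyed in the article.

```python
import numpy as np

# Sparsity pattern of a Jacobian (rows x cols), an assumed banded example.
pattern = np.array([[1, 1, 0, 0, 0],
                    [1, 1, 1, 0, 0],
                    [0, 1, 1, 1, 0],
                    [0, 0, 1, 1, 1],
                    [0, 0, 0, 1, 1]], dtype=bool)
m, n = pattern.shape

# Greedy coloring of the column intersection graph:
# two columns get the same color only if they never share a nonzero row.
colors = -np.ones(n, dtype=int)
for j in range(n):
    forbidden = {colors[k] for k in range(n)
                 if colors[k] >= 0 and np.any(pattern[:, j] & pattern[:, k])}
    c = 0
    while c in forbidden:
        c += 1
    colors[j] = c
print("column colors:", colors, "->", colors.max() + 1, "function evaluations needed")

# Recover the Jacobian of a test function with one forward difference per color group.
def F(x):                                  # assumed test function matching the pattern
    return np.array([x[0] + 2*x[1], x[0]*x[1] + x[2], x[1] + x[2]*x[3],
                     x[2] + x[3]**2 + x[4], x[3]*x[4]])

x0, h = np.ones(5), 1e-6
J = np.zeros((m, n))
F0 = F(x0)
for c in range(colors.max() + 1):
    d = (colors == c).astype(float)        # perturb all columns of this color at once
    diff = (F(x0 + h * d) - F0) / h
    for j in np.flatnonzero(colors == c):
        J[pattern[:, j], j] = diff[pattern[:, j]]
print(np.round(J, 3))
```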
{
"docid": "8096886eff1b288561cbe75302e8c578",
"text": "In this paper, we develop a framework to classify supply chain risk management problems and approaches for the solution of these problems. We argue that risk management problems need to be handled at three levels strategic, operational and tactical. In addition, risk within the supply chain might manifest itself in the form of deviations, disruptions and disasters. To handle unforeseen events in the supply chain there are two obvious approaches: (1) to design chains with built in risk-tolerance and (2) to contain the damage once the undesirable event has occurred. Both of these approaches require a clear understanding of undesirable events that may take place in the supply chain and also the associated consequences and impacts from these events. We can then focus our efforts on mapping out the propagation of events in the supply chain due to supplier non-performance, and employ our insight to develop two mathematical programming based preventive models for strategic level deviation and disruption management. The first model, a simple integer quadratic optimization model, adapted from the Markowitz model, determines optimal partner selection with the objective of minimizing both the operational cost and the variability of total operational cost. The second model, a simple mixed integer programming optimization model, adapted from the credit risk minimization model, determines optimal partner selection such that the supply shortfall is minimized even in the face of supplier disruptions. Hence, both of these models offer possible approaches to robust supply chain design.",
"title": ""
},
{
"docid": "8593882a00d738151c8cba1a99e94898",
"text": "Multimodality image registration plays a crucial role in various clinical and research applications. The aim of this study is to present an optimized MR to CT whole-body deformable image registration algorithm and its validation using clinical studies. A 3D intermodality registration technique based on B-spline transformation was performed using optimized parameters of the elastix package based on the Insight Toolkit (ITK) framework. Twenty-eight (17 male and 11 female) clinical studies were used in this work. The registration was evaluated using anatomical landmarks and segmented organs. In addition to 16 anatomical landmarks, three key organs (brain, lungs, and kidneys) and the entire body volume were segmented for evaluation. Several parameters--such as the Euclidean distance between anatomical landmarks, target overlap, Dice and Jaccard coefficients, false positives and false negatives, volume similarity, distance error, and Hausdorff distance--were calculated to quantify the quality of the registration algorithm. Dice coefficients for the majority of patients (> 75%) were in the 0.8-1 range for the whole body, brain, and lungs, which satisfies the criteria to achieve excellent alignment. On the other hand, for kidneys, Dice coefficients for volumes of 25% of the patients meet excellent volume agreement requirement, while the majority of patients satisfy good agreement criteria (> 0.6). For all patients, the distance error was in 0-10 mm range for all segmented organs. In summary, we optimized and evaluated the accuracy of an MR to CT deformable registration algorithm. The registered images constitute a useful 3D whole-body MR-CT atlas suitable for the development and evaluation of novel MR-guided attenuation correction procedures on hybrid PET-MR systems.",
"title": ""
},
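The overlap measures used for validation in the study above are direct to compute from two binary masks; the sketch below uses a synthetic sphere and a slightly shifted copy as stand-ins for a segmented organ before and after imperfect registration.

```python
import numpy as np

z, y, x = np.ogrid[:64, :64, :64]
a = (x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 <= 15 ** 2    # toy spherical "organ" mask
b = (x - 34) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 <= 15 ** 2    # same mask shifted by 2 voxels

inter = np.logical_and(a, b).sum()
union = np.logical_or(a, b).sum()
dice = 2.0 * inter / (a.sum() + b.sum())
jaccard = inter / union
target_overlap = inter / a.sum()          # treating `a` as the reference/target mask
print(f"Dice {dice:.3f}  Jaccard {jaccard:.3f}  target overlap {target_overlap:.3f}")
```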
{
"docid": "c3fb97edabf2c4fa68cf45bb888e5883",
"text": "Multi-armed bandit problems are the predominant theoretical model of exploration-exploitation tradeoffs in learning, and they have countless applications ranging from medical trials, to communication networks, to Web search and advertising. In many of these application domains, the learner may be constrained by one or more supply (or budget) limits, in addition to the customary limitation on the time horizon. The literature lacks a general model encompassing these sorts of problems. We introduce such a model, called bandits with knapsacks, that combines bandit learning with aspects of stochastic integer programming. In particular, a bandit algorithm needs to solve a stochastic version of the well-known knapsack problem, which is concerned with packing items into a limited-size knapsack. A distinctive feature of our problem, in comparison to the existing regret-minimization literature, is that the optimal policy for a given latent distribution may significantly outperform the policy that plays the optimal fixed arm. Consequently, achieving sublinear regret in the bandits-with-knapsacks problem is significantly more challenging than in conventional bandit problems.\n We present two algorithms whose reward is close to the information-theoretic optimum: one is based on a novel “balanced exploration” paradigm, while the other is a primal-dual algorithm that uses multiplicative updates. Further, we prove that the regret achieved by both algorithms is optimal up to polylogarithmic factors. We illustrate the generality of the problem by presenting applications in a number of different domains, including electronic commerce, routing, and scheduling. As one example of a concrete application, we consider the problem of dynamic posted pricing with limited supply and obtain the first algorithm whose regret, with respect to the optimal dynamic policy, is sublinear in the supply.",
"title": ""
},
{
"docid": "e743bfe8c4f19f1f9a233106919c99a7",
"text": "We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a data set of concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are labeled across a broad range of visual concepts including objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that interpretability is an axis-independent property of the representation space, then we apply the method to compare the latent representations of various networks when trained to solve different classification problems. We further analyze the effect of training iterations, compare networks trained with different initializations, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power.",
"title": ""
},
{
"docid": "a14a9e61d9a13041d095e3db05b0900c",
"text": "Constructing distributed representations for words through neural language models and using the resulting vector spaces for analysis has become a crucial component of natural language processing (NLP). However, despite their widespread application, little is known about the structure and properties of these spaces. To gain insights into the relationship between words, the NLP community has begun to adapt high-dimensional visualization techniques. In particular, researchers commonly use t-distributed stochastic neighbor embeddings (t-SNE) and principal component analysis (PCA) to create two-dimensional embeddings for assessing the overall structure and exploring linear relationships (e.g., word analogies), respectively. Unfortunately, these techniques often produce mediocre or even misleading results and cannot address domain-specific visualization challenges that are crucial for understanding semantic relationships in word embeddings. Here, we introduce new embedding techniques for visualizing semantic and syntactic analogies, and the corresponding tests to determine whether the resulting views capture salient structures. Additionally, we introduce two novel views for a comprehensive study of analogy relationships. Finally, we augment t-SNE embeddings to convey uncertainty information in order to allow a reliable interpretation. Combined, the different views address a number of domain-specific tasks difficult to solve with existing tools.",
"title": ""
},
{
"docid": "636b0dd2a23a87f91b2820d70d687a37",
"text": "KNOWLEDGE is neither data nor information, though it is related to both, and the differences between these terms are often a matter of degree. We start with those more familiar terms both because they are more familiar and because we can understand knowledge best with reference to them. Confusion about what data, information, and knowledge are -how they differ, what those words mean -has resulted in enormous expenditures on technology initiatives that rarely deliver what the firms spending the money needed or thought they were getting. Often firms don't understand what they need until they invest heavily in a system that fails to provide it.",
"title": ""
},
{
"docid": "d22c8390e6ea9ea8c7a84e188cd10ba5",
"text": "BACKGROUND\nNutrition interventions targeted to individuals are unlikely to significantly shift US dietary patterns as a whole. Environmental and policy interventions are more promising for shifting these patterns. We review interventions that influenced the environment through food availability, access, pricing, or information at the point-of-purchase in worksites, universities, grocery stores, and restaurants.\n\n\nMETHODS\nThirty-eight nutrition environmental intervention studies in adult populations, published between 1970 and June 2003, were reviewed and evaluated on quality of intervention design, methods, and description (e.g., sample size, randomization). No policy interventions that met inclusion criteria were found.\n\n\nRESULTS\nMany interventions were not thoroughly evaluated or lacked important evaluation information. Direct comparison of studies across settings was not possible, but available data suggest that worksite and university interventions have the most potential for success. Interventions in grocery stores appear to be the least effective. The dual concerns of health and taste of foods promoted were rarely considered. Sustainability of environmental change was never addressed.\n\n\nCONCLUSIONS\nInterventions in \"limited access\" sites (i.e., where few other choices were available) had the greatest effect on food choices. Research is needed using consistent methods, better assessment tools, and longer durations; targeting diverse populations; and examining sustainability. Future interventions should influence access and availability, policies, and macroenvironments.",
"title": ""
},
{
"docid": "da18fa8e30c58f6b0039d8b1dc4b11a0",
"text": "Customer churn prediction is one of the key steps to maximize the value of customers for an enterprise. It is difficult to get satisfactory prediction effect by traditional models constructed on the assumption that the training and test data are subject to the same distribution, because the customers usually come from different districts and may be subject to different distributions in reality. This study proposes a feature-selection-based dynamic transfer ensemble (FSDTE) model that aims to introduce transfer learning theory for utilizing the customer data in both the target and related source domains. The model mainly conducts a two-layer feature selection. In the first layer, an initial feature subset is selected by GMDH-type neural network only in the target domain. In the second layer, several appropriate patterns from the source domain to target training set are selected, and some features with higher mutual information between them and the class variable are combined with the initial subset to construct a new feature subset. The selection in the second layer is repeated several times to generate a series of new feature subsets, and then, we train a base classifier in each one. Finally, a best base classifier is selected dynamically for each test pattern. The experimental results in two customer churn prediction datasets show that FSDTE can achieve better performance compared with the traditional churn prediction strategies, as well as three existing transfer learning strategies.",
"title": ""
},
{
"docid": "96356639b8df06ff61b3a33563b24a8b",
"text": "the objects, such as movie reviews, book reviews, and product reviews etc. Sentiment analysis is the mining the sentiment or opinion words and identification and analysis of the opinion and arguments in the text. In this paper, we proposed an ontology based combination approach to enhance the exits approaches of sentiment classifications and use supervised learning techniques for classifications.",
"title": ""
},
{
"docid": "e30df718ca1981175e888755cce3ce90",
"text": "Human identification at distance by analysis of gait patterns extracted from video has recently become very popular research in biometrics. This paper presents multi-projections based approach to extract gait patterns for human recognition. Binarized silhouette of a motion object is represented by 1-D signals which are the basic image features called the distance vectors. The distance vectors are differences between the bounding box and silhouette, and extracted using four projections to silhouette. Eigenspace transformation is applied to time-varying distance vectors and the statistical distance based supervised pattern classification is then performed in the lower-dimensional eigenspace for human identification. A fusion strategy developed is finally executed to produce final decision. Based on normalized correlation on the distance vectors, gait cycle estimation is also performed to extract the gait cycle. Experimental results on four databases demonstrate that the right person in top two matches 100% of the times for the cases where training and testing sets corresponds to the same walking styles, and in top three-four matches 100% of the times for training and testing sets corresponds to the different walking styles.",
"title": ""
},
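The 1-D distance-vector representation described above is easy to reproduce: crop the silhouette to its bounding box and, for each row (or column), record the gap between the box edge and the first silhouette pixel from the left, right, top and bottom. The elliptical toy silhouette below is an assumption standing in for a binarized video frame.

```python
import numpy as np

def distance_vectors(mask):
    """Four 1-D projections: gaps between the bounding box edges and the silhouette."""
    ys, xs = np.nonzero(mask)
    box = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    left   = np.array([row.argmax() for row in box])          # first on-pixel from the left
    right  = np.array([row[::-1].argmax() for row in box])    # ... from the right
    top    = np.array([col.argmax() for col in box.T])        # ... from the top
    bottom = np.array([col[::-1].argmax() for col in box.T])  # ... from the bottom
    return left, right, top, bottom

# Toy binary "silhouette" (a filled ellipse) standing in for a segmented walker.
yy, xx = np.mgrid[:60, :40]
mask = ((yy - 30) / 25.0) ** 2 + ((xx - 20) / 10.0) ** 2 <= 1.0
left, right, top, bottom = distance_vectors(mask)
print("left-edge distance vector (first 10 rows):", left[:10])
```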
{
"docid": "6445e510d1e3806b878ae07288d2578b",
"text": "The functionalization of polymeric substances is of great interest for the development of 15 innovative materials for advanced applications. For many decades, the functionalization of 16 chitosan has been a convenient way to improve its properties with the aim to prepare new 17 materials with specialized characteristics. In the present article, we summarize the latest methods 18 for the modification and derivatization of chitin and chitosan, trying to introduce specific 19 functional groups under experimental conditions, which allow a control over the macromolecular 20 architecture. This is motivated because an understanding of the interdependence between chemical 21 structure and properties is an important condition for proposing innovative materials. New 22 advances in methods and strategies of functionalization such as click chemistry approach, grafting 23 onto copolymerization, coupling with cyclodextrins and reactions in ionic liquids are discussed. 24",
"title": ""
},
{
"docid": "6d00ae440b45ddad03fb04f480c8c78c",
"text": "Collaborative Filtering (CF) is widely applied to personalized recommendation systems. Traditional collaborative filtering techniques make predictions through a user-item matrix of ratings which explicitly presents user preference. With the increasingly growing number of users and items, insufficient rating data still leads to the decreasing predictive accuracy with traditional collaborative filtering approaches. In the real world, however, many different types of user feedback, e.g. review, like or not, votes etc., co-exist in many online content providers. In this paper we integrate rating data with some other new types of user feedback and propose a multi-task matrix factorization model in order for flexibly using multiple data. We use a common user feature space shared across sub-models in this model and thus the model can simultaneously train the corresponding sub-models with every training sample. Our experiments indicate that new types of user feedback really work and show improvements on predictive accuracy compared to state-of-the-art algorithms.",
"title": ""
},
{
"docid": "471471cfc90e7f212dd7bbbee08d714e",
"text": "Every year, a large number of children in the United States enter the foster care system. Many of them are eventually reunited with their biological parents or quickly adopted. A significant number, however, face long-term foster care, and some of these children are eventually adopted by their foster parents. The decision by foster parents to adopt their foster child carries significant economic consequences, including forfeiting foster care payments while also assuming responsibility for medical, legal, and educational expenses, to name a few. Since 1980, U.S. states have begun to offer adoption subsidies to offset some of these expenses, significantly lowering the cost of adopting a child who is in the foster care system. This article presents empirical evidence of the role that these economic incentives play in foster parents’ decision of when, or if, to adopt their foster child. We find that adoption subsidies increase adoptions through two distinct price mechanisms: by lowering the absolute cost of adoption, and by lowering the relative cost of adoption versus long-term foster care.",
"title": ""
},
{
"docid": "647b76de7edbca25accdd65fed64d34e",
"text": "Despite the evidence that social video conveys rich human personality information, research investigating the automatic prediction of personality impressions in vlogging has shown that, amongst the Big-Five traits, automatic nonverbal behavioral cues are useful to predict mainly the Extraversion trait. This finding, also reported in other conversational settings, indicates that personality information may be coded in other behavioral dimensions like the verbal channel, which has been less studied in multimodal interaction research. In this paper, we address the task of predicting personality impressions from vloggers based on what they say in their YouTube videos. First, we use manual transcripts of vlogs and verbal content analysis techniques to understand the ability of verbal content for the prediction of crowdsourced Big-Five personality impressions. Second, we explore the feasibility of a fully-automatic framework in which transcripts are obtained using automatic speech recognition (ASR). Our results show that the analysis of error-free verbal content is useful to predict four of the Big-Five traits, three of them better than using nonverbal cues, and that the errors caused by the ASR system decrease the performance significantly.",
"title": ""
},
{
"docid": "ad558d1f3d5ab563ade2e606464b7ca0",
"text": "Recently, densified small cell deployment with overlay coverage through coexisting heterogeneous networks has emerged as a viable solution for 5G mobile networks. However, this multi-tier architecture along with stringent latency requirements in 5G brings new challenges in security provisioning due to the potential frequent handovers and authentications in 5G small cells and HetNets. In this article, we review related studies and introduce SDN into 5G as a platform to enable efficient authentication hand-over and privacy protection. Our objective is to simplify authentication handover by global management of 5G HetNets through sharing of userdependent security context information among related access points. We demonstrate that SDN-enabled security solutions are highly efficient through its centralized control capability, which is essential for delay-constrained 5G communications.",
"title": ""
},
{
"docid": "3c017a50302e8a09eff32b97474433a1",
"text": "Few concepts embody the goals of artificial intelligence as well as fully autonomous robots. Countless films and stories have been made that focus on a future filled with autonomous agents that complete menial tasks or run errands that humans do not want or are too busy to carry out. One such task is driving automobiles. In this paper, we summarize the work we have done towards a future of fully-autonomous vehicles, specifically coordinating such vehicles safely and efficiently at intersections. We then discuss the implications this work has for other areas of AI, including planning, multiagent learning, and computer vision.",
"title": ""
},
{
"docid": "ba89a62ac2d1b36738e521d4c5664de2",
"text": "Currently, the network traffic control systems are mainly composed of the Internet core and wired/wireless heterogeneous backbone networks. Recently, these packet-switched systems are experiencing an explosive network traffic growth due to the rapid development of communication technologies. The existing network policies are not sophisticated enough to cope with the continually varying network conditions arising from the tremendous traffic growth. Deep learning, with the recent breakthrough in the machine learning/intelligence area, appears to be a viable approach for the network operators to configure and manage their networks in a more intelligent and autonomous fashion. While deep learning has received a significant research attention in a number of other domains such as computer vision, speech recognition, robotics, and so forth, its applications in network traffic control systems are relatively recent and garnered rather little attention. In this paper, we address this point and indicate the necessity of surveying the scattered works on deep learning applications for various network traffic control aspects. In this vein, we provide an overview of the state-of-the-art deep learning architectures and algorithms relevant to the network traffic control systems. Also, we discuss the deep learning enablers for network systems. In addition, we discuss, in detail, a new use case, i.e., deep learning based intelligent routing. We demonstrate the effectiveness of the deep learning-based routing approach in contrast with the conventional routing strategy. Furthermore, we discuss a number of open research issues, which researchers may find useful in the future.",
"title": ""
}
] |
scidocsrr
|
51daa90398d59d92015166b7fbbfd226
|
Data-driven advice for applying machine learning to bioinformatics problems
|
[
{
"docid": "40f21a8702b9a0319410b716bda0a11e",
"text": "A number of supervised learning methods have been introduced in the last decade. Unfortunately, the last comprehensive empirical evaluation of supervised learning was the Statlog Project in the early 90's. We present a large-scale empirical comparison between ten supervised learning methods: SVMs, neural nets, logistic regression, naive bayes, memory-based learning, random forests, decision trees, bagged trees, boosted trees, and boosted stumps. We also examine the effect that calibrating the models via Platt Scaling and Isotonic Regression has on their performance. An important aspect of our study is the use of a variety of performance criteria to evaluate the learning methods.",
"title": ""
},
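The calibration step mentioned in the abstract above maps a model's raw scores to probabilities by fitting either a sigmoid (Platt scaling) or an isotonic regression on held-out folds; scikit-learn wraps both. The synthetic dataset and the random-forest base learner below are assumptions for illustration only.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import brier_score_loss

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

base = RandomForestClassifier(n_estimators=100, random_state=0)
for name, method in [("uncalibrated", None), ("Platt (sigmoid)", "sigmoid"), ("isotonic", "isotonic")]:
    clf = base if method is None else CalibratedClassifierCV(base, method=method, cv=3)
    clf.fit(X_tr, y_tr)
    p = clf.predict_proba(X_te)[:, 1]
    print(f"{name:16s} Brier score: {brier_score_loss(y_te, p):.4f}")
```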
{
"docid": "71b5c8679979cccfe9cad229d4b7a952",
"text": "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one.\n In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.",
"title": ""
}
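Stripped to its core, the local-explanation idea summarized above is: sample perturbations around the instance, query the black box, weight the samples by proximity, and fit a small weighted linear model whose coefficients serve as the explanation. The sketch below is a generic tabular approximation with assumed data, kernel width and regularizer, not the authors' released implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_locally(x, predict_proba, n_samples=2000, kernel_width=1.0, seed=0):
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=X.std(axis=0), size=(n_samples, x.size))   # perturb around x
    pz = predict_proba(Z)[:, 1]                                         # query the black box
    dist = np.linalg.norm((Z - x) / X.std(axis=0), axis=1)
    w = np.exp(-(dist ** 2) / (kernel_width ** 2))                      # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(Z, pz, sample_weight=w)            # interpretable local model
    return surrogate.coef_

coefs = explain_locally(X[0], black_box.predict_proba)
for i in np.argsort(-np.abs(coefs))[:3]:
    print(f"feature {i}: local weight {coefs[i]:+.3f}")
```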
] |
[
{
"docid": "27bed0efd42918f783e16ca0cf0b8c4a",
"text": "This report documents the program and the outcomes of Dagstuhl Seminar 17301 “User-Generated Content in Social Media”. Social media have a profound impact on individuals, businesses, and society. As users post vast amounts of text and multimedia content every minute, the analysis of this user generated content (UGC) can offer insights to individual and societal concerns and could be beneficial to a wide range of applications. In this seminar, we brought together researchers from different subfields of computer science, such as information retrieval, multimedia, natural language processing, machine learning and social media analytics. We discussed the specific properties of UGC, the general research tasks currently operating on this type of content, identifying their limitations, and imagining new types of applications. We formed two working groups, WG1 “Fake News and Credibility”, WG2 “Summarizing and Story Telling from UGC”. WG1 invented an “Information Nutrition Label” that characterizes a document by different features such as e.g. emotion, opinion, controversy, and topicality; For computing these feature values, available methods and open research issues were identified. WG2 developed a framework for summarizing heterogeneous, multilingual and multimodal data, discussed key challenges and applications of this framework. Seminar July 23–28, 2017 – http://www.dagstuhl.de/17301 1998 ACM Subject Classification H Information Systems, H.5 Information Interfaces and Presentation, H.5.1 Multimedia Information Systems, H.3 Information Storage and Retrieval, H.1 Models and principles, I Computing methodologies, I.2 Artificial Intelligence, I.2.6 Learning, I.2.7 Natural language processing, J Computer Applications, J.4 Social and behavioural sciences, K Computing Milieux, K.4 Computers and Society, K.4.1 Public policy issues",
"title": ""
},
{
"docid": "69bb10420be07fe9fb0fd372c606d04e",
"text": "Contextual text mining is concerned with extracting topical themes from a text collection with context information (e.g., time and location) and comparing/analyzing the variations of themes over different contexts. Since the topics covered in a document are usually related to the context of the document, analyzing topical themes within context can potentially reveal many interesting theme patterns. In this paper, we generalize some of these models proposed in the previous work and we propose a new general probabilistic model for contextual text mining that can cover several existing models as special cases. Specifically, we extend the probabilistic latent semantic analysis (PLSA) model by introducing context variables to model the context of a document. The proposed mixture model, called contextual probabilistic latent semantic analysis (CPLSA) model, can be applied to many interesting mining tasks, such as temporal text mining, spatiotemporal text mining, author-topic analysis, and cross-collection comparative analysis. Empirical experiments show that the proposed mixture model can discover themes and their contextual variations effectively.",
"title": ""
},
{
"docid": "52e1acca8a09cec2a97822dc24d0ed7b",
"text": "In this paper virtual teams are defined as living systems and as such made up of people with different needs and characteristics. Groups generally perform better when they are able to establish a high level of group cohesion. According to Druskat and Wolff [2001] this status can be reached by establishing group emotional intelligence. Group emotional intelligence is reached via interactions among members and the interactions are allowed through the disposable linking factors. Virtual linking factors differ from traditional linking factors; therefore, the concept of Virtual Emotional Intelligence is here introduced in order to distinguish the group cohesion reaching process in virtual teams.",
"title": ""
},
{
"docid": "9de00d8cf6b3001f976fa49c42875620",
"text": "This paper is a preliminary report on the efficiency of two strategies of data reduction in a data preprocessing stage. In the first experiment, we apply the Count-Min sketching algorithm, while in the second experiment we discretize our data prior to applying the Count-Min algorithm. By conducting a discretization before sketching, the need for the increased number of buckets in sketching is reduced. This preliminary attempt of combining two methods with the same purpose has shown potential. In our experiments, we use sensor data collected to study the environmental fluctuation and its impact on the quality of fresh peaches and nectarines in cold chain.",
"title": ""
},
{
"docid": "1c1cc9d6b538fda6d2a38ff1dcce7085",
"text": "Major speech production models from speech science literature and a number of popular statistical “generative” models of speech used in speech technology are surveyed. Strengths and weaknesses of these two styles of speech models are analyzed, pointing to the need to integrate the respective strengths while eliminating the respective weaknesses. As an example, a statistical task-dynamic model of speech production is described, motivated by the original deterministic version of the model and targeted for integrated-multilingual speech recognition applications. Methods for model parameter learning (training) and for likelihood computation (recognition) are described based on statistical optimization principles integrated in neural network and dynamic system theories.",
"title": ""
},
{
"docid": "f4bdd6416013dfd2b552efef9c1b22e9",
"text": "ABSTRACT\nTraumatic hemipelvectomy is an uncommon and life threatening injury. We report a case of a 16-year-old boy involved in a traffic accident who presented with an almost circumferential pelvic wound with wide diastasis of the right sacroiliac joint and symphysis pubis. The injury was associated with complete avulsion of external and internal iliac vessels as well as the femoral and sciatic nerves. He also had ipsilateral open comminuted fractures of the femur and tibia. Emergency debridement and completion of amputation with preservation of the posterior gluteal flap and primary anastomosis of the inferior gluteal vessels to the internal iliac artery stump were performed. A free fillet flap was used to close the massive exposed area.\n\n\nKEY WORDS\ntraumatic hemipelvectomy, amputation, and free gluteus maximus fillet flap.",
"title": ""
},
{
"docid": "4e46fb5c1abb3379519b04a84183b055",
"text": "Categorical models of emotions posit neurally and physiologically distinct human basic emotions. We tested this assumption by using multivariate pattern analysis (MVPA) to classify brain activity patterns of 6 basic emotions (disgust, fear, happiness, sadness, anger, and surprise) in 3 experiments. Emotions were induced with short movies or mental imagery during functional magnetic resonance imaging. MVPA accurately classified emotions induced by both methods, and the classification generalized from one induction condition to another and across individuals. Brain regions contributing most to the classification accuracy included medial and inferior lateral prefrontal cortices, frontal pole, precentral and postcentral gyri, precuneus, and posterior cingulate cortex. Thus, specific neural signatures across these regions hold representations of different emotional states in multimodal fashion, independently of how the emotions are induced. Similarity of subjective experiences between emotions was associated with similarity of neural patterns for the same emotions, suggesting a direct link between activity in these brain regions and the subjective emotional experience.",
"title": ""
},
{
"docid": "5cd48ee461748d989c40f8e0f0aa9581",
"text": "Being able to identify which rhetorical relations (e.g., contrast or explanation) hold between spans of text is important for many natural language processing applications. Using machine learning to obtain a classifier which can distinguish between different relations typically depends on the availability of manually labelled training data, which is very time-consuming to create. However, rhetorical relations are sometimes lexically marked, i.e., signalled by discourse markers (e.g., because, but, consequently etc.), and it has been suggested (Marcu and Echihabi, 2002) that the presence of these cues in some examples can be exploited to label them automatically with the corresponding relation. The discourse markers are then removed and the automatically labelled data are used to train a classifier to determine relations even when no discourse marker is present (based on other linguistic cues such as word co-occurrences). In this paper, we investigate empirically how feasible this approach is. In particular, we test whether automatically labelled, lexically marked examples are really suitable training material for classifiers that are then applied to unmarked examples. Our results suggest that training on this type of data may not be such a good strategy, as models trained in this way do not seem to generalise very well to unmarked data. Furthermore, we found some evidence that this behaviour is largely independent of the classifiers used and seems to lie in the data itself (e.g., marked and unmarked examples may be too dissimilar linguistically and removing unambiguous markers in the automatic labelling process may lead to a meaning shift in the examples).",
"title": ""
},
{
"docid": "601748e27c7b3eefa4ff29252b42bf93",
"text": "A simple, fast method is presented for the interpolation of texture coordinates and shading parameters for polygons viewed in perspective. The method has application in scan conversion algorithms like z-bu er and painter's algorithms that perform screen space interpolation of shading parameters such as texture coordinates, colors, and normal vectors. Some previous methods perform linear interpolation in screen space, but this is rotationally variant, and in the case of texture mapping, causes a disturbing \\rubber sheet\" e ect. To correctly compute the nonlinear, projective transformation between screen space and parameter space, we use rational linear interpolation across the polygon, performing several divisions at each pixel. We present simpler formulas for setting up these interpolation computations, reducing the setup cost per polygon to nil and reducing the cost per vertex to a handful of divisions. Additional keywords: incremental, perspective, projective, a ne.",
"title": ""
},
{
"docid": "c227f76c42ae34af11193e3ecb224ecb",
"text": "Antibiotics and antibiotic resistance determinants, natural molecules closely related to bacterial physiology and consistent with an ancient origin, are not only present in antibiotic-producing bacteria. Throughput sequencing technologies have revealed an unexpected reservoir of antibiotic resistance in the environment. These data suggest that co-evolution between antibiotic and antibiotic resistance genes has occurred since the beginning of time. This evolutionary race has probably been slow because of highly regulated processes and low antibiotic concentrations. Therefore to understand this global problem, a new variable must be introduced, that the antibiotic resistance is a natural event, inherent to life. However, the industrial production of natural and synthetic antibiotics has dramatically accelerated this race, selecting some of the many resistance genes present in nature and contributing to their diversification. One of the best models available to understand the biological impact of selection and diversification are β-lactamases. They constitute the most widespread mechanism of resistance, at least among pathogenic bacteria, with more than 1000 enzymes identified in the literature. In the last years, there has been growing concern about the description, spread, and diversification of β-lactamases with carbapenemase activity and AmpC-type in plasmids. Phylogenies of these enzymes help the understanding of the evolutionary forces driving their selection. Moreover, understanding the adaptive potential of β-lactamases contribute to exploration the evolutionary antagonists trajectories through the design of more efficient synthetic molecules. In this review, we attempt to analyze the antibiotic resistance problem from intrinsic and environmental resistomes to the adaptive potential of resistance genes and the driving forces involved in their diversification, in order to provide a global perspective of the resistance problem.",
"title": ""
},
{
"docid": "4927fee47112be3d859733c498fbf594",
"text": "To design effective tools for detecting and recovering from software failures requires a deep understanding of software bug characteristics. We study software bug characteristics by sampling 2,060 real world bugs in three large, representative open-source projects—the Linux kernel, Mozilla, and Apache. We manually study these bugs in three dimensions—root causes, impacts, and components. We further study the correlation between categories in different dimensions, and the trend of different types of bugs. The findings include: (1) semantic bugs are the dominant root cause. As software evolves, semantic bugs increase, while memory-related bugs decrease, calling for more research effort to address semantic bugs; (2) the Linux kernel operating system (OS) has more concurrency bugs than its non-OS counterparts, suggesting more effort into detecting concurrency bugs in operating system code; and (3) reported security bugs are increasing, and the majority of them are caused by semantic bugs, suggesting more support to help developers diagnose and fix security bugs, especially semantic security bugs. In addition, to reduce the manual effort in building bug benchmarks for evaluating bug detection and diagnosis tools, we use machine learning techniques to classify 109,014 bugs automatically.",
"title": ""
},
{
"docid": "089ef4e4469554a4d4ef75089fe9c7be",
"text": "The attention of software vendors has moved recently to SMEs (smallto medium-sized enterprises), offering them a vast range of enterprise systems (ES), which were formerly adopted by large firms only. From reviewing information technology innovation adoption literature, it can be argued that IT innovations are highly differentiated technologies for which there is not necessarily a single adoption model. Additionally, the question of why one SME adopts an ES while another does not is still understudied. This study intends to fill this gap by investigating the factors impacting SME adoption of ES. A qualitative approach was adopted in this study involving key decision makers in nine SMEs in the Northwest of England. The contribution of this study is twofold: it provides a framework that can be used as a theoretical basis for studying SME adoption of ES, and it empirically examines the impact of the factors within this framework on SME adoption of ES. The findings of this study confirm that factors impacting the adoption of ES are different from factors impacting SME adoption of other previously studied IT innovations. Contrary to large companies that are mainly affected by organizational factors, this study shows that SMEs are not only affected by environmental factors as previously established, but also affected by technological and organizational factors.",
"title": ""
},
{
"docid": "0bd3beaad8cd6d6f19603eca9320718d",
"text": "For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books. Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought. Vercellis, Carlo. Business intelligence : data mining and optimization for decision making / Carlo Vercellis. p. cm. Includes bibliographical references and index.",
"title": ""
},
{
"docid": "af2ccb9d51cd28426fd4f03e7454d7bf",
"text": "How we categorize certain objects depends on the processes they afford: something is a vehicle because it affords transportation, a house because it offers shelter or a watercourse because water can flow in it. The hypothesis explored here is that image schemas (such as LINK, CONTAINER, SUPPORT, and PATH) capture abstractions that are essential to model affordances and, by implication, categories. To test the idea, I develop an algebraic theory formalizing image schemas and accounting for the role of affordances in categorizing spatial entities.",
"title": ""
},
{
"docid": "ae9219c7e3d85b7b8f83569d000a02bb",
"text": "This paper proposes a bidirectional switched-capacitor dc-dc converter for applications that require high voltage gain. Some of conventional switched-capacitor dc-dc converters have diverse voltage or current stresses for the switching devices in the circuit, not suitable for modular configuration or for high efficiency demand; some suffer from relatively high power loss or large device count for high voltage gain, even if the device voltage stress could be low. By contrast, the proposed dc-dc converter features low component (switching device and capacitor) power rating, small switching device count, and low output capacitance requirement. In addition to its low current stress, the combination of two short symmetric paths of charge pumps further lowers power loss. Therefore, a small and light converter with high voltage gain and high efficiency can be achieved. Simulation and experimental results of a 450-W prototype with a voltage conversion ratio of six validate the principle and features of this topology.",
"title": ""
},
{
"docid": "eb8d681fcfd5b18c15dd09738ab4717c",
"text": "Building a dialogue agent to fulfill complex tasks, such as travel planning, is challenging because the agent has to learn to collectively complete multiple subtasks. For example, the agent needs to reserve a hotel and book a flight so that there leaves enough time for commute between arrival and hotel check-in. This paper addresses this challenge by formulating the task in the mathematical framework of options over Markov Decision Processes (MDPs), and proposing a hierarchical deep reinforcement learning approach to learning a dialogue manager that operates at different temporal scales. The dialogue manager consists of (1) a top-level dialogue policy that selects among subtasks or options, (2) a low-level dialogue policy that selects primitive actions to complete the subtask given by the top-level policy, and (3) a global state tracker that helps ensure all cross-subtask constraints be satisfied. Experiments on a travel planning task with simulated and real users show that our approach leads to significant improvements over two baselines, one based on handcrafted rules and the other based on flat deep reinforcement learning.",
"title": ""
},
{
"docid": "71bc346237c5f97ac245dd7b7bbb497f",
"text": "Using survey responses collected via the Internet from a U.S. national probability sample of gay, lesbian, and bisexual adults (N = 662), this article reports prevalence estimates of criminal victimization and related experiences based on the target's sexual orientation. Approximately 20% of respondents reported having experienced a person or property crime based on their sexual orientation; about half had experienced verbal harassment, and more than 1 in 10 reported having experienced employment or housing discrimination. Gay men were significantly more likely than lesbians or bisexuals to experience violence and property crimes. Employment and housing discrimination were significantly more likely among gay men and lesbians than among bisexual men and women. Implications for future research and policy are discussed.",
"title": ""
},
{
"docid": "4f40700ccdc1b6a8a306389f1d7ea107",
"text": "Skin cancer exists in different forms like Melanoma, Basal and Squamous cell Carcinoma among which Melanoma is the most dangerous and unpredictable. In this paper, we implement an image processing technique for the detection of Melanoma Skin Cancer using the software MATLAB which is easy for implementation as well as detection of Melanoma skin cancer. The input to the system is the skin lesion image. This image proceeds with the image pre-processing methods such as conversion of RGB image to Grayscale image, noise removal and so on. Further Otsu thresholding is used to segment the images followed by feature extraction that includes parameters like Asymmetry, Border Irregularity, Color and Diameter (ABCD) and then Total Dermatoscopy Score (TDS) is calculated. The calculation of TDS determines the presence of Melanoma skin cancer by classifying it as benign, suspicious or highly suspicious skin lesion.",
"title": ""
},
{
"docid": "419f031c3220676ba64c3ec983d4e160",
"text": "Volumetric muscle loss (VML) injuries exceed the considerable intrinsic regenerative capacity of skeletal muscle, resulting in permanent functional and cosmetic deficits. VML and VML-like injuries occur in military and civilian populations, due to trauma and surgery as well as due to a host of congenital and acquired diseases/syndromes. Current therapeutic options are limited, and new approaches are needed for a more complete functional regeneration of muscle. A potential solution is human hair-derived keratin (KN) biomaterials that may have significant potential for regenerative therapy. The goal of these studies was to evaluate the utility of keratin hydrogel formulations as a cell and/or growth factor delivery vehicle for functional muscle regeneration in a surgically created VML injury in the rat tibialis anterior (TA) muscle. VML injuries were treated with KN hydrogels in the absence and presence of skeletal muscle progenitor cells (MPCs), and/or insulin-like growth factor 1 (IGF-1), and/or basic fibroblast growth factor (bFGF). Controls included VML injuries with no repair (NR), and implantation of bladder acellular matrix (BAM, without cells). Initial studies conducted 8 weeks post-VML injury indicated that application of keratin hydrogels with growth factors (KN, KN+IGF-1, KN+bFGF, and KN+IGF-1+bFGF, n = 8 each) enabled a significantly greater functional recovery than NR (n = 7), BAM (n = 8), or the addition of MPCs to the keratin hydrogel (KN+MPC, KN+MPC+IGF-1, KN+MPC+bFGF, and KN+MPC+IGF-1+bFGF, n = 8 each) (p < 0.05). A second series of studies examined functional recovery for as many as 12 weeks post-VML injury after application of keratin hydrogels in the absence of cells. A significant time-dependent increase in functional recovery of the KN, KN+bFGF, and KN+IGF+bFGF groups was observed, relative to NR and BAM implantation, achieving as much as 90% of the maximum possible functional recovery. Histological findings from harvested tissue at 12 weeks post-VML injury documented significant increases in neo-muscle tissue formation in all keratin treatment groups as well as diminished fibrosis, in comparison to both BAM and NR. In conclusion, keratin hydrogel implantation promoted statistically significant and physiologically relevant improvements in functional outcomes post-VML injury to the rodent TA muscle.",
"title": ""
},
{
"docid": "f18a0ae573711eb97b9b4150d53182f3",
"text": "The Electrocardiogram (ECG) is commonly used to detect arrhythmias. Traditionally, a single ECG observation is used for diagnosis, making it difficult to detect irregular arrhythmias. Recent technology developments, however, have made it cost-effective to collect large amounts of raw ECG data over time. This promises to improve diagnosis accuracy, but the large data volume presents new challenges for cardiologists. This paper introduces ECGLens, an interactive system for arrhythmia detection and analysis using large-scale ECG data. Our system integrates an automatic heartbeat classification algorithm based on convolutional neural network, an outlier detection algorithm, and a set of rich interaction techniques. We also introduce A-glyph, a novel glyph designed to improve the readability and comparison of ECG signals. We report results from a comprehensive user study showing that A-glyph improves the efficiency in arrhythmia detection, and demonstrate the effectiveness of ECGLens in arrhythmia detection through two expert interviews.",
"title": ""
}
] |
scidocsrr
|
3ac26b503b33bc09d6d95c2d36e7d9e4
|
Interaction techniques for older adults using touchscreen devices: a literature review
|
[
{
"docid": "708309417183398e86ab537158459a98",
"text": "Despite the demonstrated benefits of bimanual interaction, most tablets use just one hand for interaction, to free the other for support. In a preliminary study, we identified five holds that permit simultaneous support and interaction, and noted that users frequently change position to combat fatigue. We then designed the BiTouch design space, which introduces a support function in the kinematic chain model for interacting with hand-held tablets, and developed BiPad, a toolkit for creating bimanual tablet interaction with the thumb or the fingers of the supporting hand. We ran a controlled experiment to explore how tablet orientation and hand position affect three novel techniques: bimanual taps, gestures and chords. Bimanual taps outperformed our one-handed control condition in both landscape and portrait orientations; bimanual chords and gestures in portrait mode only; and thumbs outperformed fingers, but were more tiring and less stable. Together, BiTouch and BiPad offer new opportunities for designing bimanual interaction on hand-held tablets.",
"title": ""
}
] |
[
{
"docid": "48716199f7865e8cf16fc723b897bb13",
"text": "The current study aimed to review studies on computational thinking (CT) indexed in Web of Science (WOS) and ERIC databases. A thorough search in electronic databases revealed 96 studies on computational thinking which were published between 2006 and 2016. Studies were exposed to a quantitative content analysis through using an article control form developed by the researchers. Studies were summarized under several themes including the research purpose, design, methodology, sampling characteristics, data analysis, and main findings. The findings were reported using descriptive statistics to see the trends. It was observed that there was an increase in the number of CT studies in recent years, and these were mainly conducted in the field of computer sciences. In addition, CT studies were mostly published in journals in the field of Education and Instructional Technologies. Theoretical paradigm and literature review design were preferred more in previous studies. The most commonly used sampling method was the purposive sampling. It was also revealed that samples of previous CT studies were generally pre-college students. Written data collection tools and quantitative analysis were mostly used in reviewed papers. Findings mainly focused on CT skills. Based on current findings, recommendations and implications for further researches were provided.",
"title": ""
},
{
"docid": "277071a4a2dde56c13ca2be8abd4b73d",
"text": "Most state-of-the-art information extraction approaches rely on token-level labels to find the areas of interest in text. Unfortunately, these labels are time-consuming and costly to create, and consequently, not available for many real-life IE tasks. To make matters worse, token-level labels are usually not the desired output, but just an intermediary step. End-to-end (E2E) models, which take raw text as input and produce the desired output directly, need not depend on token-level labels. We propose an E2E model based on pointer networks, which can be trained directly on pairs of raw input and output text. We evaluate our model on the ATIS data set, MIT restaurant corpus and the MIT movie corpus and compare to neural baselines that do use token-level labels. We achieve competitive results, within a few percentage points of the baselines, showing the feasibility of E2E information extraction without the need for token-level labels. This opens up new possibilities, as for many tasks currently addressed by human extractors, raw input and output data are available, but not token-level labels.",
"title": ""
},
{
"docid": "8730b884da4444c9be6d8c13d7b983e1",
"text": "The design and structure of a self-assembly modular robot (Sambot) are presented in this paper. Each module has its own autonomous mobility and can connect with other modules to form robotic structures with different manipulation abilities. Sambot has a versatile, robust, and flexible structure. The computing platform provided for each module is distributed and consists of a number of interlinked microcontrollers. The interaction and connectivity between different modules is achieved through infrared sensors and Zigbee wireless communication in discrete state and control area network bus communication in robotic configuration state. A new mechanical design is put forth to realize the autonomous motion and docking of Sambots. It is a challenge to integrate actuators, sensors, microprocessors, power units, and communication elements into a highly compact and flexible module with the overall size of 80 mm × 80 mm × 102 mm. The work describes represents a mature development in the area of self-assembly distributed robotics.",
"title": ""
},
{
"docid": "df354ff3f0524d960af7beff4ec0a68b",
"text": "The paper presents digital beamforming for Passive Coherent Location (PCL) radar. The considered circular antenna array is a part of a passive system developed at Warsaw University of Technology. The system is based on FM radio transmitters. The array consists of eight half-wave dipoles arranged in a circular array covering 360deg with multiple beams. The digital beamforming procedure is presented, including mutual coupling correction and antenna pattern optimization. The results of field calibration and measurements are also shown.",
"title": ""
},
{
"docid": "8ee0a87116d700c8ad982f08d8215c1d",
"text": "Game generation systems perform automated, intelligent design of games (i.e. videogames, boardgames), reasoning about both the abstract rule system of the game and the visual realization of these rules. Although, as an instance of the problem of creative design, game generation shares some common research themes with other creative AI systems such as story and art generators, game generation extends such work by having to reason about dynamic, playable artifacts. Like AI work on creativity in other domains, work on game generation sheds light on the human game design process, offering opportunities to make explicit the tacit knowledge involved in game design and test game design theories. Finally, game generation enables new game genres which are radically customized to specific players or situations; notable examples are cell phone games customized for particular users and newsgames providing commentary on current events. We describe an approach to formalizing game mechanics and generating games using those mechanics, using WordNet and ConceptNet to assist in performing common-sense reasoning about game verbs and nouns. Finally, we demonstrate and describe in detail a prototype that designs micro-games in the style of Nintendo’s",
"title": ""
},
{
"docid": "34919dc04bab57299c22d709902aea68",
"text": "In the rank join problem, we are given a set of relations and a scoring function, and the goal is to return the join results with the top k scores. It is often the case in practice that the inputs may be accessed in ranked order and the scoring function is monotonic. These conditions allow for efficient algorithms that solve the rank join problem without reading all of the input. In this article, we present a thorough analysis of such rank join algorithms. A strong point of our analysis is that it is based on a more general problem statement than previous work, making it more relevant to the execution model that is employed by database systems. One of our results indicates that the well-known HRJN algorithm has shortcomings, because it does not stop reading its input as soon as possible. We find that it is NP-hard to overcome this weakness in the general case, but cases of limited query complexity are tractable. We prove the latter with an algorithm that infers provably tight bounds on the potential benefit of reading more input in order to stop as soon as possible. As a result, the algorithm achieves a cost that is within a constant factor of optimal.",
"title": ""
},
{
"docid": "605c6b431b336ebe2ed07e7fcf529121",
"text": "Standard approaches to probabilistic reasoning require that one possesses an explicit model of the distribution in question. But, the empirical learning of models of probability distributions from partial observations is a problem for which efficient algorithms are generally not known. In this work we consider the use of bounded-degree fragments of the “sum-of-squares” logic as a probability logic. Prior work has shown that we can decide refutability for such fragments in polynomial-time. We propose to use such fragments to decide queries about whether a given probability distribution satisfies a given system of constraints and bounds on expected values. We show that in answering such queries, such constraints and bounds can be implicitly learned from partial observations in polynomial-time as well. It is known that this logic is capable of deriving many bounds that are useful in probabilistic analysis. We show here that it furthermore captures key polynomial-time fragments of resolution. Thus, these fragments are also quite expressive.",
"title": ""
},
{
"docid": "40c93dacc8318bc440d23fedd2acbd47",
"text": "An electrical-balance duplexer uses series connected step-down transformers to enhance linearity and power handling capability by reducing the voltage swing across nonlinear components. Wideband, dual-notch Tx-to-Rx isolation is demonstrated experimentally with a planar inverted-F antenna. The 0.18μm CMOS prototype achieves >50dB isolation for 220MHz aggregated bandwidth or >40dB dual-notch isolation for 160MHz bandwidth, +49dBm Tx-path IIP3 and -48dBc ACLR1 for +27dBm at the antenna.",
"title": ""
},
{
"docid": "ba10bfce4c5deabb663b5ca490c320c9",
"text": "OBJECTIVE\nAlthough the relationship between religious practice and health is well established, the relationship between spirituality and health is not as well studied. The objective of this study was to ascertain whether participation in the mindfulness-based stress reduction (MBSR) program was associated with increases in mindfulness and spirituality, and to examine the associations between mindfulness, spirituality, and medical and psychological symptoms.\n\n\nMETHODS\nForty-four participants in the University of Massachusetts Medical School's MBSR program were assessed preprogram and postprogram on trait (Mindful Attention and Awareness Scale) and state (Toronto Mindfulness Scale) mindfulness, spirituality (Functional Assessment of Chronic Illness Therapy--Spiritual Well-Being Scale), psychological distress, and reported medical symptoms. Participants also kept a log of daily home mindfulness practice. Mean changes in scores were computed, and relationships between changes in variables were examined using mixed-model linear regression.\n\n\nRESULTS\nThere were significant improvements in spirituality, state and trait mindfulness, psychological distress, and reported medical symptoms. Increases in both state and trait mindfulness were associated with increases in spirituality. Increases in trait mindfulness and spirituality were associated with decreases in psychological distress and reported medical symptoms. Changes in both trait and state mindfulness were independently associated with changes in spirituality, but only changes in trait mindfulness and spirituality were associated with reductions in psychological distress and reported medical symptoms. No association was found between outcomes and home mindfulness practice.\n\n\nCONCLUSIONS\nParticipation in the MBSR program appears to be associated with improvements in trait and state mindfulness, psychological distress, and medical symptoms. Improvements in trait mindfulness and spirituality appear, in turn, to be associated with improvements in psychological and medical symptoms.",
"title": ""
},
{
"docid": "4ddf4cf69d062f7ea1da63e68c316f30",
"text": "The Di†use Infrared Background Experiment (DIRBE) on the Cosmic Background Explorer (COBE) spacecraft was designed primarily to conduct a systematic search for an isotropic cosmic infrared background (CIB) in 10 photometric bands from 1.25 to 240 km. The results of that search are presented here. Conservative limits on the CIB are obtained from the minimum observed brightness in all-sky maps at each wavelength, with the faintest limits in the DIRBE spectral range being at 3.5 km (lIl \\ 64 nW m~2 sr~1, 95% conÐdence level) and at 240 km nW m~2 sr~1, 95% conÐdence level). The (lIl\\ 28 bright foregrounds from interplanetary dust scattering and emission, stars, and interstellar dust emission are the principal impediments to the DIRBE measurements of the CIB. These foregrounds have been modeled and removed from the sky maps. Assessment of the random and systematic uncertainties in the residuals and tests for isotropy show that only the 140 and 240 km data provide candidate detections of the CIB. The residuals and their uncertainties provide CIB upper limits more restrictive than the dark sky limits at wavelengths from 1.25 to 100 km. No plausible solar system or Galactic source of the observed 140 and 240 km residuals can be identiÐed, leading to the conclusion that the CIB has been detected at levels of and 14^ 3 nW m~2 sr~1 at 140 and 240 km, respectively. The intelIl\\ 25 ^ 7 grated energy from 140 to 240 km, 10.3 nW m~2 sr~1, is about twice the integrated optical light from the galaxies in the Hubble Deep Field, suggesting that star formation might have been heavily enshrouded by dust at high redshift. The detections and upper limits reported here provide new constraints on models of the history of energy-releasing processes and dust production since the decoupling of the cosmic microwave background from matter. Subject headings : cosmology : observations È di†use radiation È infrared : general",
"title": ""
},
{
"docid": "7d62ae437a6b77e19f0d3292954a8471",
"text": "A numerical tool for the optimisation of the scantlings of a ship is extended by considering production cost, weight and moment of inertia in the objective function. A multi-criteria optimisation of a passenger ship is conducted to illustrate the analysis process. Pareto frontiers are obtained and results are verified with Bureau Veritas rules.",
"title": ""
},
{
"docid": "93bebbc1112dbfd34fce1b3b9d228f9a",
"text": "UNLABELLED\nThere has been no established qualitative system of interpretation for therapy response assessment using PET/CT for head and neck cancers. The objective of this study was to validate the Hopkins interpretation system to assess therapy response and survival outcome in head and neck squamous cell cancer patients (HNSCC).\n\n\nMETHODS\nThe study included 214 biopsy-proven HNSCC patients who underwent a posttherapy PET/CT study, between 5 and 24 wk after completion of treatment. The median follow-up was 27 mo. PET/CT studies were interpreted by 3 nuclear medicine physicians, independently. The studies were scored using a qualitative 5-point scale, for the primary tumor, for the right and left neck, and for overall assessment. Scores 1, 2, and 3 were considered negative for tumors, and scores 4 and 5 were considered positive for tumors. The Cohen κ coefficient (κ) was calculated to measure interreader agreement. Overall survival (OS) and progression-free survival (PFS) were analyzed by Kaplan-Meier plots with a Mantel-Cox log-rank test and Gehan Breslow Wilcoxon test for comparisons.\n\n\nRESULTS\nOf the 214 patients, 175 were men and 39 were women. There was 85.98%, 95.33%, 93.46%, and 87.38% agreement between the readers for overall, left neck, right neck, and primary tumor site response scores, respectively. The corresponding κ coefficients for interreader agreement between readers were, 0.69-0.79, 0.68-0.83, 0.69-0.87, and 0.79-0.86 for overall, left neck, right neck, and primary tumor site response, respectively. The sensitivity, specificity, positive predictive value, negative predictive value, and overall accuracy of the therapy assessment were 68.1%, 92.2%, 71.1%, 91.1%, and 86.9%, respectively. Cox multivariate regression analysis showed human papillomavirus (HPV) status and PET/CT interpretation were the only factors associated with PFS and OS. Among the HPV-positive patients (n = 123), there was a significant difference in PFS (hazard ratio [HR], 0.14; 95% confidence interval, 0.03-0.57; P = 0.0063) and OS (HR, 0.01; 95% confidence interval, 0.00-0.13; P = 0.0006) between the patients who had a score negative for residual tumor versus positive for residual tumor. A similar significant difference was observed in PFS and OS for all patients. There was also a significant difference in the PFS of patients with PET-avid residual disease in one site versus multiple sites in the neck (HR, 0.23; log-rank P = 0.004).\n\n\nCONCLUSION\nThe Hopkins 5-point qualitative therapy response interpretation criteria for head and neck PET/CT has substantial interreader agreement and excellent negative predictive value and predicts OS and PFS in patients with HPV-positive HNSCC.",
"title": ""
},
{
"docid": "86aaee95a4d878b53fd9ee8b0735e208",
"text": "The tensegrity concept has long been considered as a basis for lightweight and compact packaging deployable structures, but very few studies are available. This paper presents a complete design study of a deployable tensegrity mast with all the steps involved: initial formfinding, structural analysis, manufacturing and deployment. Closed-form solutions are used for the formfinding. A manufacturing procedure in which the cables forming the outer envelope of the mast are constructed by two-dimensional weaving is used. The deployment of the mast is achieved through the use of self-locking hinges. A stiffness comparison between the tensegrity mast and an articulated truss mast shows that the tensegrity mast is weak in bending.",
"title": ""
},
{
"docid": "046207a87b7b01f6bc12f08a195670b9",
"text": "Text normalization is the task of transforming lexical variants to their canonical forms. We model the problem of text normalization as a character-level sequence to sequence learning problem and present a neural encoder-decoder model for solving it. To train the encoder-decoder model, many sentences pairs are generally required. However, Japanese non-standard canonical pairs are scarce in the form of parallel corpora. To address this issue, we propose a method of data augmentation to increase data size by converting existing resources into synthesized non-standard forms using handcrafted rules. We conducted an experiment to demonstrate that the synthesized corpus contributes to stably train an encoder-decoder model and improve the performance of Japanese text normalization.",
"title": ""
},
{
"docid": "a5cd94446abfc46c6d5c4e4e376f1e0a",
"text": "Commitment problem in credit market and its eãects on economic growth are discussed. Completions of investment projects increase capital stock of the economy. These projects require credits which are ånanced by ånacial intermediaries. A simpliåed credit model of Dewatripont and Maskin is used to describe the ånancing process, in which the commitment problem or the \\soft budget constraint\" problem arises. However, in dynamic general equilibrium setup with endougenous determination of value and cost of projects, there arise multiple equilibria in the project ånancing model, namely reånancing equilirium and no-reånancing equilibrium. The former leads the economy to the stationary state with smaller capital stock level than the latter. Both the elimination of reånancing equilibrium and the possibility of \\Animal Spirits Cycles\" equilibrium are also discussed.",
"title": ""
},
{
"docid": "43831e29e62c574a93b6029409690bfe",
"text": "We present a convolutional network that is equivariant to rigid body motions. The model uses scalar-, vector-, and tensor fields over 3D Euclidean space to represent data, and equivariant convolutions to map between such representations. These SE(3)-equivariant convolutions utilize kernels which are parameterized as a linear combination of a complete steerable kernel basis, which is derived analytically in this paper. We prove that equivariant convolutions are the most general equivariant linear maps between fields over R. Our experimental results confirm the effectiveness of 3D Steerable CNNs for the problem of amino acid propensity prediction and protein structure classification, both of which have inherent SE(3) symmetry.",
"title": ""
},
{
"docid": "257eca5511b1657f4a3cd2adff1989f8",
"text": "The monitoring of volcanoes is mainly performed by sensors installed on their structures, aiming at recording seismic activities and reporting them to observatories to be later analyzed by specialists. However, due to the high volume of data continuously collected, the use of automatic techniques is an important requirement to support real time analyses. In this sense, a basic but challenging task is the classification of seismic activities to identify signals yielded by different sources as, for instance, the movement of magmatic fluids. Although there exists several approaches proposed to perform such task, they were mainly designed to deal with raw signals. In this paper, we present a 2D approach developed considering two main steps. Firstly, spectrograms for every collected signal are calculated by using Fourier Transform. Secondly, we set a deep neural network to discriminate seismic activities by analyzing the spectrogram shapes. As a consequence, our classifier provided outstanding results with accuracy rates greater than 95%.",
"title": ""
},
{
"docid": "c6347c06d84051023baaab39e418fb65",
"text": "This paper presents a complete approach to a successful utilization of a high-performance extreme learning machines (ELMs) Toolbox for Big Data. It summarizes recent advantages in algorithmic performance; gives a fresh view on the ELM solution in relation to the traditional linear algebraic performance; and reaps the latest software and hardware performance achievements. The results are applicable to a wide range of machine learning problems and thus provide a solid ground for tackling numerous Big Data challenges. The included toolbox is targeted at enabling the full potential of ELMs to the widest range of users.",
"title": ""
},
{
"docid": "b50ea06c20fb22d7060f08bc86d9d6ca",
"text": "The advent of the Social Web has provided netizens with new tools for creating and sharing, in a time- and cost-efficient way, their contents, ideas, and opinions with virtually the millions of people connected to the World Wide Web. This huge amount of information, however, is mainly unstructured as specifically produced for human consumption and, hence, it is not directly machine-processable. In order to enable a more efficient passage from unstructured information to structured data, aspect-based opinion mining models the relations between opinion targets contained in a document and the polarity values associated with these. Because aspects are often implicit, however, spotting them and calculating their respective polarity is an extremely difficult task, which is closer to natural language understanding rather than natural language processing. To this end, Sentic LDA exploits common-sense reasoning to shift LDA clustering from a syntactic to a semantic level. Rather than looking at word co-occurrence frequencies, Sentic LDA leverages on the semantics associated with words and multi-word expressions to improve clustering and, hence, outperform state-of-the-art techniques for aspect extraction.",
"title": ""
},
{
"docid": "ef7e0be7ec3af89c5f8f5a050c52ed9a",
"text": "We approach recognition in the framework of deformable shape matching, relying on a new algorithm for finding correspondences between feature points. This algorithm sets up correspondence as an integer quadratic programming problem, where the cost function has terms based on similarity of corresponding geometric blur point descriptors as well as the geometric distortion between pairs of corresponding feature points. The algorithm handles outliers, and thus enables matching of exemplars to query images in the presence of occlusion and clutter. Given the correspondences, we estimate an aligning transform, typically a regularized thin plate spline, resulting in a dense correspondence between the two shapes. Object recognition is handled in a nearest neighbor framework where the distance between exemplar and query is the matching cost between corresponding points. We show results on two datasets. One is the Caltech 101 dataset (Li, Fergus and Perona), a challenging dataset with large intraclass variation. Our approach yields a 45% correct classification rate in addition to localization. We also show results for localizing frontal and profile faces that are comparable to special purpose approaches tuned to faces.",
"title": ""
}
] |
scidocsrr
|
f2be6f6f08cbf168403ebedc0c3a7152
|
Blinkering surveillance: Enabling video privacy through computer vision
|
[
{
"docid": "34627572a319dfdfcea7277d2650d0f5",
"text": "Visual speech information from the speaker’s mouth region has been successfully shown to improve noise robustness of automatic speech recognizers, thus promising to extend their usability in the human computer interface. In this paper, we review the main components of audio-visual automatic speech recognition and present novel contributions in two main areas: First, the visual front end design, based on a cascade of linear image transforms of an appropriate video region-of-interest, and subsequently, audio-visual speech integration. On the latter topic, we discuss new work on feature and decision fusion combination, the modeling of audio-visual speech asynchrony, and incorporating modality reliability estimates to the bimodal recognition process. We also briefly touch upon the issue of audio-visual adaptation. We apply our algorithms to three multi-subject bimodal databases, ranging from smallto largevocabulary recognition tasks, recorded in both visually controlled and challenging environments. Our experiments demonstrate that the visual modality improves automatic speech recognition over all conditions and data considered, though less so for visually challenging environments and large vocabulary tasks.",
"title": ""
}
] |
[
{
"docid": "9af22f6a1bbb4cbb13508b654e5fd7a5",
"text": "We present a 3-D correspondence method to match the geometric extremities of two shapes which are partially isometric. We consider the most general setting of the isometric partial shape correspondence problem, in which shapes to be matched may have multiple common parts at arbitrary scales as well as parts that are not similar. Our rank-and-vote-and-combine algorithm identifies and ranks potentially correct matches by exploring the space of all possible partial maps between coarsely sampled extremities. The qualified top-ranked matchings are then subjected to a more detailed analysis at a denser resolution and assigned with confidence values that accumulate into a vote matrix. A minimum weight perfect matching algorithm is finally iterated to combine the accumulated votes into an optimal (partial) mapping between shape extremities, which can further be extended to a denser map. We test the performance of our method on several data sets and benchmarks in comparison with state of the art.",
"title": ""
},
{
"docid": "ca683d498e690198ca433050c3d91fd0",
"text": "Cross-graph Relational Learning (CGRL) refers to the problem of predicting the strengths or labels of multi-relational tuples of heterogeneous object types, through the joint inference over multiple graphs which specify the internal connections among each type of objects. CGRL is an open challenge in machine learning due to the daunting number of all possible tuples to deal with when the numbers of nodes in multiple graphs are large, and because the labeled training instances are extremely sparse as typical. Existing methods such as tensor factorization or tensor-kernel machines do not work well because of the lack of convex formulation for the optimization of CGRL models, the poor scalability of the algorithms in handling combinatorial numbers of tuples, and/or the non-transductive nature of the learning methods which limits their ability to leverage unlabeled data in training. This paper proposes a novel framework which formulates CGRL as a convex optimization problem, enables transductive learning using both labeled and unlabeled tuples, and offers a scalable algorithm that guarantees the optimal solution and enjoys a linear time complexity with respect to the sizes of input graphs. In our experiments with a subset of DBLP publication records and an Enzyme multi-source dataset, the proposed method successfully scaled to the large cross-graph inference problem, and outperformed other representative approaches significantly.",
"title": ""
},
{
"docid": "e31749775e64d5407a090f5fd0a275cf",
"text": "This paper focuses on presenting a human-in-the-loop reinforcement learning theory framework and foreseeing its application to driving decision making. Currently, the technologies in human-vehicle collaborative driving face great challenges, and do not consider the Human-in-the-loop learning framework and Driving Decision-Maker optimization under the complex road conditions. The main content of this paper aimed at presenting a study framework as follows: (1) the basic theory and model of the hybrid reinforcement learning; (2) hybrid reinforcement learning algorithm for human drivers; (3)hybrid reinforcement learning algorithm for autopilot; (4) Driving decision-maker verification platform. This paper aims at setting up the human-machine hybrid reinforcement learning theory framework and foreseeing its solutions to two kinds of typical difficulties about human-machine collaborative Driving Decision-Maker, which provides the basic theory and key technologies for the future of intelligent driving. The paper serves as a potential guideline for the study of human-in-the-loop reinforcement learning.",
"title": ""
},
{
"docid": "618ef5ddb544548639b80a495897284a",
"text": "UNLABELLED\nCoccydynia is pain in the coccygeal region, and usually treated conservatively. Extracorporeal shock wave therapy (ESWT) was incorporated as non-invasive treatment of many musculoskeletal conditions. However, the effects of ESWT on coccydynia are less discussed. The purpose of this study is to evaluate the effects of ESWT on the outcomes of coccydynia. Patients were allocated to ESWT (n = 20) or physical modality (SIT) group (n = 21) randomly, and received total treatment duration of 4 weeks. The visual analog scale (VAS), Oswestry disability index (ODI), and self-reported satisfaction score were used to assess treatment effects. The VAS and ODI scores were significantly decreased after treatment in both groups, and the decrease in the VAS score was significantly greater in the ESWT group. The mean proportional changes in the ODI scores were greater in the ESWT group than in the SIT group, but the between-group difference was not statistically significant. The patients in the ESWT group had significantly higher subjective satisfaction scores than SIT group. We concluded that ESWT is more effective and satisfactory in reducing discomfort and disability caused by coccydynia than the use of physical modalities. Thus, ESWT is recommended as an alternative treatment option for patients with coccydynia.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT02313324.",
"title": ""
},
{
"docid": "331fbc1b16722669ff83321c7e7fe9b8",
"text": "Coupled-inductor interleaved boost converters are under development for high-current, high-power applications ranging from automotive to distributed generation. The operating modes of these coupled-inductor converters can be complex. This paper presents an investigation of the various continuous-current (CCM) and discontinuous-current (DCM) modes of operation of the coupled-inductor interleaved two-phase boost converter. The various CCM and DCM of the converter are identified together with their submodes of operation. The standard discrete-inductor interleaved two-phase boost can be seen as a subset of the coupled-inductor converter family with zero mutual coupling between the phases. The steady-state operating characteristics, equations and waveforms for the many CCM and DCM will be presented for the converter family. Mode maps will be developed to map the converter operation across the modes over the operating range. Experimental validation is presented from a 3.6 kW laboratory prototype. Design considerations and experimental results are presented for a 72 kW prototype.",
"title": ""
},
{
"docid": "70d69b3933393decd4bdb1e4e21fe07e",
"text": "The population living in cities is continuously increasing worldwide. In developing countries, this phenomenon is exacerbated by poverty, leading to tremendous problems of employment, immigration from the rural areas, transportation, food supply and environment protection. Simultaneously with the growth of cities, a new type of agriculture has emerged; namely, urban agriculture. Here, the main functions of urban agriculture are described: its social roles, the economic functions as part of its multi-functionality, the constraints, and the risks for human consumption and the living environment. We highlight the following major points. (1) Agricultural activity will continue to be a strong contributor to urban households. Currently, differences between rural and urban livelihood households appear to be decreasing. (2) Urban agricultural production includes aquaculture, livestock and plants. The commonest crops are perishable leafy vegetables, particularly in South-east Asia and Africa. These vegetable industries have short marketing chains with lower price differentials between farmers and consumers than longer chains. The city food supply function is one of the various roles and objectives of urban agriculture that leads to increasing dialogue between urban dwellers, city authorities and farmers. (3) One of the farmers’ issues is to produce high quality products in highly populated areas and within a polluted environment. Agricultural production in cities faces the following challenges: access to the main agricultural inputs, fertilizers and water; production in a polluted environment; and limitation of its negative impact on the environment. Urban agriculture can reuse city wastes, but this will not be enough to achieve high yields, and there is still a risk of producing unsafe products. These are the main challenges for urban agriculture in keeping its multi-functional activities such as cleansing, opening up the urban space, and producing fresh and nutritious food.",
"title": ""
},
{
"docid": "ae0474dc41871a28cc3b62dfd672ad0a",
"text": "Recent success in deep learning has generated immense interest among practitioners and students, inspiring many to learn about this new technology. While visual and interactive approaches have been successfully developed to help people more easily learn deep learning, most existing tools focus on simpler models. In this work, we present GAN Lab, the first interactive visualization tool designed for non-experts to learn and experiment with Generative Adversarial Networks (GANs), a popular class of complex deep learning models. With GAN Lab, users can interactively train generative models and visualize the dynamic training process's intermediate results. GAN Lab tightly integrates an model overview graph that summarizes GAN's structure, and a layered distributions view that helps users interpret the interplay between submodels. GAN Lab introduces new interactive experimentation features for learning complex deep learning models, such as step-by-step training at multiple levels of abstraction for understanding intricate training dynamics. Implemented using TensorFlow.js, GAN Lab is accessible to anyone via modern web browsers, without the need for installation or specialized hardware, overcoming a major practical challenge in deploying interactive tools for deep learning.",
"title": ""
},
{
"docid": "f91ba4b37a2a9d80e5db5ace34e6e50a",
"text": "Bearing currents and shaft voltages of an induction motor are measured under hardand soft-switching inverter excitation. The objective is to investigate whether the soft-switching technologies can provide solutions for reducing the bearing currents and shaft voltages. Two of the prevailing soft-switching inverters, the resonant dc-link inverter and the quasi-resonant dc-link inverter, are tested. The results are compared with those obtained using the conventional hard-switching inverter. To ensure objective comparisons between the softand hard-switching inverters, all inverters were configured identically and drove the same induction motor under the same operating conditions when the test data were collected. An insightful explanation of the experimental results is also provided to help understand the mechanisms of bearing currents and shaft voltages produced in the inverter drives. Consistency between the bearing current theory and the experimental results has been demonstrated. Conclusions are then drawn regarding the effectiveness of the soft-switching technologies as a solution to the bearing current and shaft voltage problems.",
"title": ""
},
{
"docid": "7a8619e3adf03c8b00a3e830c3f1170b",
"text": "We present a robot-pose-registration algorithm, which is entirely based on large planar-surface patches extracted from point clouds sampled from a three-dimensional (3-D) sensor. This approach offers an alternative to the traditional point-to-point iterative-closest-point (ICP) algorithm, its point-to-plane variant, as well as newer grid-based algorithms, such as the 3-D normal distribution transform (NDT). The simpler case of known plane correspondences is tackled first by deriving expressions for least-squares pose estimation considering plane-parameter uncertainty computed during plane extraction. Closed-form expressions for covariances are also derived. To round-off the solution, we present a new algorithm, which is called minimally uncertain maximal consensus (MUMC), to determine the unknown plane correspondences by maximizing geometric consistency by minimizing the uncertainty volume in configuration space. Experimental results from three 3-D sensors, viz., Swiss-Ranger, University of South Florida Odetics Laser Detection and Ranging, and an actuated SICK S300, are given. The first two have low fields of view (FOV) and moderate ranges, while the third has a much bigger FOV and range. Experimental results show that this approach is not only more robust than point- or grid-based approaches in plane-rich environments, but it is also faster, requires significantly less memory, and offers a less-cluttered planar-patches-based visualization.",
"title": ""
},
{
"docid": "48a75e28154d630da14fd3dba09d0af8",
"text": "Over the years, artificial intelligence (AI) is spreading its roots in different areas by utilizing the concept of making the computers learn and handle complex tasks that previously require substantial laborious tasks by human beings. With better accuracy and speed, AI is helping lawyers to streamline work processing. New legal AI software tools like Catalyst, Ross intelligence, and Matlab along with natural language processing provide effective quarrel resolution, better legal clearness, and superior admittance to justice and fresh challenges to conventional law firms providing legal services using leveraged cohort correlate model. This paper discusses current applications of legal AI and suggests deep learning and machine learning techniques that can be applied in future to simplify the cumbersome legal tasks.",
"title": ""
},
{
"docid": "56255e2f0f1fb76267d0a1002763e573",
"text": "Recent technology surveys identified flash light detection and ranging technology as the best choice for the navigation and landing of spacecrafts in extraplanetary missions, working from single-point altimeter to range-imaging camera mode. Among all available technologies for a 2D array of direct time-of-flight (DTOF) pixels, CMOS single-photon avalanche diodes (SPADs) represent the ideal candidate due to their rugged design and electronics integration. However, state-of-the-art SPAD imagers are not designed for operation over a wide variety of scenarios, including variable background light, very long to short range, or fast relative movement.",
"title": ""
},
{
"docid": "0d7ce42011c48232189c791e71c289f5",
"text": "RECENT WORK in virtue ethics, particularly sustained reflection on specific virtues, makes it possible to argue that the classical list of cardinal virtues (prudence, justice, temperance, and fortitude) is inadequate, and that we need to articulate the cardinal virtues more correctly. With that end in view, the first section of this article describes the challenges of espousing cardinal virtues today, the second considers the inadequacy of the classical listing of cardinal virtues, and the third makes a proposal. Since virtues, no matter how general, should always relate to concrete living, the article is framed by a case.",
"title": ""
},
{
"docid": "29b4a9f3b3da3172e319d11b8f938a7b",
"text": "Since social media have become very popular during the past few years, researchers have been focusing on being able to automatically process and extract sentiments information from large volume of social media data. This paper contributes to the topic, by focusing on sentiment analysis for Chinese social media. In this paper, we propose to rely on Part of Speech (POS) tags in order to extract unigrams and bigrams features. Bigrams are generated according to the grammatical relation between consecutive words. With those features, we have shown that focusing on a specific topic allows to reach higher estimation accuracy.",
"title": ""
},
{
"docid": "8f01d2e70ec5da655418a6864e94b932",
"text": "Cloud storage services allow users to outsource their data to cloud servers to save on local data storage costs. However, unlike using local storage devices, users don't physically own the data stored on cloud servers and can't be certain about the integrity of the cloud-stored data. Many public verification schemes have been proposed to allow a third-party auditor to verify the integrity of outsourced data. However, most of these schemes assume that the auditors are honest and reliable, so are vulnerable to malicious auditors. Moreover, in most of these schemes, an external adversary could modify the outsourced data and tamper with the interaction messages between the cloud server and the auditor, thus invalidating the outsourced data integrity verification. This article proposes an efficient and secure public verification of data integrity scheme that protects against external adversaries and malicious auditors. The proposed scheme adopts a random masking technique to protect against external adversaries, and requires users to audit auditors' behaviors to prevent malicious auditors from fabricating verification results. It uses Bitcoin to construct unbiased challenge messages to thwart collusion between malicious auditors and cloud servers. A performance analysis demonstrates that the proposed scheme is efficient in terms of the user's auditing overhead.",
"title": ""
},
{
"docid": "8b02f168b2021287848b413ffb297636",
"text": "BACKGROUND\nIdentification of patient at risk of subglottic infantile hemangioma (IH) is challenging because subglottic IH can grow fast and cause airway obstruction with a fatal course.\n\n\nOBJECTIVE\nTo refine the cutaneous IH pattern at risk of subglottic IH.\n\n\nMETHODS\nProspective and retrospective review of patients with cutaneous IH involving the beard area. IHs were classified in the bilateral pattern group (BH) or in the unilateral pattern group (UH). Infantile hemangioma topography, subtype (telangiectatic or tuberous), ear, nose and throat (ENT) manifestations and subglottic involvement were recorded.\n\n\nRESULTS\nThirty-one patients (21 BH and 10 UH) were included during a 20-year span. Nineteen patients (16 BH and 3 UH) had subglottic hemangioma. BH and UH group overlap on the median pattern (tongue, gum, lips, chin and neck). Median pattern, particularly the neck area and telangiectatic subtype of IH were significantly associated with subglottic involvement.\n\n\nCONCLUSION\nPatients presenting with telangiectatic beard IH localized on the median area need early ENT exploration. They should be treated before respiratory symptoms occur.",
"title": ""
},
{
"docid": "77e385b7e7305ec0553c980f22bfa3b4",
"text": "Two and three-dimensional simulations of experiments on atmosphere mixing and stratification in a nuclear power plant containment were performed with the code CFX4.4, with the inclusion of simple models for steam condensation. The purpose was to assess the applicability of the approach to simulate the behaviour of light gases in containments at accident conditions. The comparisons of experimental and simulated results show that, despite a tendency to simulate more intensive mixing, the proposed approach may replicate the non-homogeneous structure of the atmosphere reasonably well. Introduction One of the nuclear reactor safety issues that have lately been considered using Computational Fluid Dynamics (CFD) codes is the problem of predicting the eventual non-homogeneous concentration of light flammable gas (hydrogen) in the containment of a nuclear power plant (NPP) at accident conditions. During a hypothetical severe accident in a Pressurized Water Reactor NPP, hydrogen could be generated due to Zircaloy oxidation in the reactor core. Eventual high concentrations of hydrogen in some parts of the containment could cause hydrogen ignition and combustion, which could threaten the containment integrity. The purpose of theoretical investigations is to predict hydrogen behaviour at accident conditions prior to combustion. In the past few years, many investigations about the possible application of CFD codes for this purpose have been started [1-5]. CFD codes solve the transport mass, momentum and energy equations when a fluid system is modelled using local instantaneous description. Some codes, which also use local instantaneous description, have been developed specifically for nuclear applications [68]. Although many CFD codes are multi-purpose, some of them still lack some models, which are necessary for adequate simulations of containment phenomena. In particular, the modelling of steam condensation often has to be incorporated in the codes by the users. These theoretical investigations are complemented by adequate experiments. Recently, the following novel integral experimental facilities have been set up in Europe: TOSQAN [9,10], at the Institut de Radioprotection et de Sureté Nucléaire (IRSN) in Saclay (France), MISTRA [9,11], at the",
"title": ""
},
{
"docid": "edd25b7f6c031161afc81cc6013ba58a",
"text": "This paper presents a method for airport detection from optical satellite images using deep convolutional neural networks (CNN). To achieve fast detection with high accuracy, region proposal by searching adjacent parallel line segments has been applied to select candidate fields with potential runways. These proposals were further classified by a CNN model transfer learned from AlexNet to identify the final airport regions from other confusing classes. The proposed method has been tested on a remote sensing dataset consisting of 120 airports. Experiments showed that the proposed method could recognize airports from a large complex area in seconds with an accuracy of 84.1%.",
"title": ""
},
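The airport-detection passage above describes a two-stage pipeline: line-segment-based region proposals followed by a classifier transfer-learned from AlexNet. Below is a minimal sketch of the classification stage only, assuming the proposal step has already produced cropped candidate patches; the binary airport/background head, the frozen-feature choice and the optimizer settings are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer-learn an AlexNet backbone for airport vs. background classification.
# Candidate patches are assumed to come from the line-segment proposal stage.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                      # freeze convolutional features
model.classifier[6] = nn.Linear(4096, 2)         # new head: airport / not airport

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(batch_images, batch_labels):
    """One optimisation step on a batch of proposal crops shaped (N, 3, 224, 224)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(batch_images), batch_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```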
{
"docid": "9a033f2ba2dc67f7beb2a86c13f91793",
"text": "Plasticity is an intrinsic property of the human brain and represents evolution's invention to enable the nervous system to escape the restrictions of its own genome and thus adapt to environmental pressures, physiologic changes, and experiences. Dynamic shifts in the strength of preexisting connections across distributed neural networks, changes in task-related cortico-cortical and cortico-subcortical coherence and modifications of the mapping between behavior and neural activity take place in response to changes in afferent input or efferent demand. Such rapid, ongoing changes may be followed by the establishment of new connections through dendritic growth and arborization. However, they harbor the danger that the evolving pattern of neural activation may in itself lead to abnormal behavior. Plasticity is the mechanism for development and learning, as much as a cause of pathology. The challenge we face is to learn enough about the mechanisms of plasticity to modulate them to achieve the best behavioral outcome for a given subject.",
"title": ""
},
{
"docid": "935a576ef026c6891f9ba77ac6dc2507",
"text": "This is Part II of two papers evaluating the feasibility of providing all energy for all purposes (electric power, transportation, and heating/cooling), everywhere in the world, from wind, water, and the sun (WWS). In Part I, we described the prominent renewable energy plans that have been proposed and discussed the characteristics of WWS energy systems, the global demand for and availability of WWS energy, quantities and areas required for WWS infrastructure, and supplies of critical materials. Here, we discuss methods of addressing the variability of WWS energy to ensure that power supply reliably matches demand (including interconnecting geographically dispersed resources, using hydroelectricity, using demand-response management, storing electric power on site, over-sizing peak generation capacity and producing hydrogen with the excess, storing electric power in vehicle batteries, and forecasting weather to project energy supplies), the economics of WWS generation and transmission, the economics of WWS use in transportation, and policy measures needed to enhance the viability of a WWS system. We find that the cost of energy in a 100% WWS will be similar to the cost today. We conclude that barriers to a 100% conversion to WWS power worldwide are primarily social and political, not technological or even economic. & 2010 Elsevier Ltd. All rights reserved. 1. Variability and reliability in a 100% WWS energy system in all regions of the world One of the major concerns with the use of energy supplies, such as wind, solar, and wave power, which produce variable output is whether such supplies can provide reliable sources of electric power second-by-second, daily, seasonally, and yearly. A new WWS energy infrastructure must be able to provide energy on demand at least as reliably as does the current infrastructure (e.g., De Carolis and Keith, 2005). In general, any electricity system must be able to respond to changes in demand over seconds, minutes, hours, seasons, and years, and must be able to accommodate unanticipated changes in the availability of generation. With the current system, electricity-system operators use ‘‘automatic generation control’’ (AGC) (or frequency regulation) to respond to variation on the order of seconds to a few minutes; spinning reserves to respond to variation on the order of minutes to an hour; and peak-power generation to respond to hourly variation (De Carolis and Keith, 2005; Kempton and Tomic, 2005a; Electric Power Research Institute, 1997). AGC and spinning reserves have very low ll rights reserved. Delucchi), cost, typically less than 10% of the total cost of electricity (Kempton and Tomic, 2005a), and are likely to remain this inexpensive even with large amounts of wind power (EnerNex, 2010; DeCesaro et al., 2009), but peak-power generation can be very expensive. The main challenge for the current electricity system is that electric power demand varies during the day and during the year, while most supply (coal, nuclear, and geothermal) is constant during the day, which means that there is a difference to be made up by peakand gap-filling resources such as natural gas and hydropower. Another challenge to the current system is that extreme events and unplanned maintenance can shut down plants unexpectedly. 
For example, unplanned maintenance can shut down coal plants, extreme heat waves can cause cooling water to warm sufficiently to shut down nuclear plants, supply disruptions can curtail the availability of natural gas, and droughts can reduce the availability of hydroelectricity. A WWS electricity system offers new challenges but also new opportunities with respect to reliably meeting energy demands. On the positive side, WWS technologies generally suffer less downtime than do current electric power technologies. For example, the average coal plant in the US from 2000 to 2004 was down 6.5% of the year for unscheduled maintenance and 6.0% of the year for scheduled maintenance (North American Electric Reliability Corporation, 2009a), but modern wind turbines have a down time of only 0–2% over land and 0–5% over the ocean (Dong Energy et al., M.A. Delucchi, M.Z. Jacobson / Energy Policy 39 (2011) 1170–119",
"title": ""
},
{
"docid": "da61b8bd6c1951b109399629f47dad16",
"text": "In this paper, we introduce an approach for distributed nonlinear control of multiple hovercraft-type underactuated vehicles with bounded and unidirectional inputs. First, a bounded nonlinear controller is given for stabilization and tracking of a single vehicle, using a cascade backstepping method. Then, this controller is combined with a distributed gradient-based control for multi-vehicle formation stabilization using formation potential functions previously constructed. The vehicles are used in the Caltech Multi-Vehicle Wireless Testbed (MVWT). We provide simulation and experimental results for stabilization and tracking of a single vehicle, and a simulation of stabilization of a six-vehicle formation, demonstrating that in all cases the control bounds and the control objective are satisfied.",
"title": ""
}
] |
scidocsrr
|
ab5a77ba322b65ba51c20ce4b8c7e400
|
A comparative survey of artificial intelligence applications in finance: artificial neural networks, expert system and hybrid intelligent systems
|
[
{
"docid": "33df3da22e9a24767c68e022bb31bbe5",
"text": "The credit card industry has been growing rapidly recently, and thus huge numbers of consumers’ credit data are collected by the credit department of the bank. The credit scoring manager often evaluates the consumer’s credit with intuitive experience. However, with the support of the credit classification model, the manager can accurately evaluate the applicant’s credit score. Support Vector Machine (SVM) classification is currently an active research area and successfully solves classification problems in many domains. This study used three strategies to construct the hybrid SVM-based credit scoring models to evaluate the applicant’s credit score from the applicant’s input features. Two credit datasets in UCI database are selected as the experimental data to demonstrate the accuracy of the SVM classifier. Compared with neural networks, genetic programming, and decision tree classifiers, the SVM classifier achieved an identical classificatory accuracy with relatively few input features. Additionally, combining genetic algorithms with SVM classifier, the proposed hybrid GA-SVM strategy can simultaneously perform feature selection task and model parameters optimization. Experimental results show that SVM is a promising addition to the existing data mining methods. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
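As a rough illustration of the wrapper idea in the hybrid GA-SVM strategy described above, the sketch below scores random feature subsets with a cross-validated SVM and keeps the best mask; a genetic algorithm would replace the random sampling with selection, crossover and mutation. Dataset loading and the GA itself are omitted, so treat this as an assumed simplification rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def best_feature_mask(X, y, n_trials=50, seed=0):
    """Score random feature subsets with a cross-validated RBF-SVM and keep
    the best-performing mask (a stand-in for the GA feature-selection wrapper)."""
    rng = np.random.default_rng(seed)
    best_mask, best_score = None, -np.inf
    for _ in range(n_trials):
        mask = rng.random(X.shape[1]) < 0.5          # random subset of features
        if not mask.any():
            continue
        score = cross_val_score(SVC(C=1.0, gamma="scale"), X[:, mask], y, cv=5).mean()
        if score > best_score:
            best_mask, best_score = mask, score
    return best_mask, best_score
```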
{
"docid": "ee11c968b4280f6da0b1c0f4544bc578",
"text": "A report is presented of some results of an ongoing project using neural-network modeling and learning techniques to search for and decode nonlinear regularities in asset price movements. The author focuses on the case of IBM common stock daily returns. Having to deal with the salient features of economic data highlights the role to be played by statistical inference and requires modifications to standard learning techniques which may prove useful in other contexts.<<ETX>>",
"title": ""
}
] |
[
{
"docid": "c0d4538f34499d19f14c3adba8527280",
"text": "OBJECTIVE\nTo consider the use of the diagnostic category 'complex posttraumatic stress disorder' (c-PTSD) as detailed in the forthcoming ICD-11 classification system as a less stigmatising, more clinically useful term, instead of the current DSM-5 defined condition of 'borderline personality disorder' (BPD).\n\n\nCONCLUSIONS\nTrauma, in its broadest definition, plays a key role in the development of both c-PTSD and BPD. Given this current lack of differentiation between these conditions, and the high stigma faced by people with BPD, it seems reasonable to consider using the diagnostic term 'complex posttraumatic stress disorder' to decrease stigma and provide a trauma-informed approach for BPD patients.",
"title": ""
},
{
"docid": "14b9aaa9ff0be3ed0a8d420fb63f54dd",
"text": "Stream reasoning studies the application of inference techniques to data characterised by being highly dynamic. It can find application in several settings, from Smart Cities to Industry 4.0, from Internet of Things to Social Media analytics. This year stream reasoning turns ten, and in this article we analyse its growth. In the first part, we trace the main results obtained so far, by presenting the most prominent studies. We start by an overview of the most relevant studies developed in the context of semantic web, and then we extend the analysis to include contributions from adjacent areas, such as database and artificial intelligence. Looking at the past is useful to prepare for the future: in the second part, we present a set of open challenges and issues that stream reasoning will face in the next future.",
"title": ""
},
{
"docid": "6d15f9766e35b2c78ce5402ed44cdf57",
"text": "Models that acquire semantic representations from both linguistic and perceptual input are of interest to researchers in NLP because of the obvious parallels with human language learning. Performance advantages of the multi-modal approach over language-only models have been clearly established when models are required to learn concrete noun concepts. However, such concepts are comparatively rare in everyday language. In this work, we present a new means of extending the scope of multi-modal models to more commonly-occurring abstract lexical concepts via an approach that learns multimodal embeddings. Our architecture outperforms previous approaches in combining input from distinct modalities, and propagates perceptual information on concrete concepts to abstract concepts more effectively than alternatives. We discuss the implications of our results both for optimizing the performance of multi-modal models and for theories of abstract conceptual representation.",
"title": ""
},
{
"docid": "4d8a26e88a77861c7a2185cdd3d0694b",
"text": "Transcript profiling of hsr203J, a known marker gene for Hypersensitive response (HR), was performed to delineate its role in differential defense against Alternaria brassicae in tolerant and susceptible genotypes of Brassica juncea. Reverse transcriptase (RT) PCR approach was utilized to investigate the correlation between expression of hsr203J like gene(s) and pathogenesis in stage dependent manner. It was revealed that the expression of hsr203J like gene increased as disease progressed from initial too late stage of infection in tolerant genotype. However, in susceptible genotype, its expression increased up to middle stage of infection with no expression in late stage of infection. In both genotypes, no expression of hsr203J like gene was observed in healthy leaves. It was observed that whereas three homologues of hsr203J like gene express at the late stage of infection in tolerant genotype, only single homologue of same expresses in susceptible genotype throughout all stages of infection. This indicates the role of hsr203J homologues in determining the differential defense response against Alternaria blight in Brassica. Determination of specific activity and in-gel assay revealed differential accumulation of protease and protease inhibitor in tolerant and susceptible genotypes at different stages of infection. Induction of differential protease and protease inhibitor activity appears to modulate the cell death during HR response to pathogen. Dissection of pathway leading to HR related cell death will enable us to know the molecular basis of disease resistance which will, in turn, help in engineering Brassica for resistance to Alternaria blight.",
"title": ""
},
{
"docid": "5551c139bf9bdb144fabce6a20fda331",
"text": "A common prerequisite for a number of debugging and performanceanalysis techniques is the injection of auxiliary program code into the application under investigation, a process called instrumentation. To accomplish this task, source-code preprocessors are often used. Unfortunately, existing preprocessing tools either focus only on a very specific aspect or use hard-coded commands for instrumentation. In this paper, we examine which basic constructs are required to specify a user-defined routine entry/exit instrumentation. This analysis serves as a basis for a generic instrumentation component working on the source-code level where the instructions to be inserted can be flexibly configured. We evaluate the identified constructs with our prototypical implementation and show that these are sufficient to fulfill the needs of a number of todays’ performance-analysis tools.",
"title": ""
},
{
"docid": "39b4b7e77e357c9cc73038498f0f2cd1",
"text": "Traditional machine learning algorithms often fail to generalize to new input distributions, causing reduced accuracy. Domain adaptation attempts to compensate for the performance degradation by transferring and adapting source knowledge to target domain. Existing unsupervised methods project domains into a lower-dimensional space and attempt to align the subspace bases, effectively learning a mapping from source to target points or vice versa. However, they fail to take into account the difference of the two distributions in the subspaces, resulting in misalignment even after adaptation. We present a unified view of existing subspace mapping based methods and develop a generalized approach that also aligns the distributions as well as the subspace bases. Background. Domain adaptation, or covariate shift, is a fundamental problem in machine learning, and has attracted a lot of attention in the machine learning and computer vision community. Domain adaptation methods for visual data attempt to learn classifiers on a labeled source domain and transfer it to a target domain. There are two settings for visual domain adaptation: (1) unsupervised domain adaptation where there are no labeled examples available in the target domain; and (2) semisupervised domain adaptation where there are a few labeled examples in the target domain. Most existing algorithms operate in the semi-superised setting. However, in real world applications, unlabeled target data is often much more abundant and labeled examples are very limited, so the question of how to utilize the unlabeled target data is more important for practical visual domain adaptation. Thus, in this paper, we focus on the unsupervised scenario. Most of the existing unsupervised approaches have pursued adaptation by separately projecting the source and target data into a lowerdimensional manifold, and finding a transformation that brings the subspaces closer together. This process is illustrated in Figure 1. Geodesic methods [2, 3] find a path along the subspace manifold, and either project source and target onto points along that path [3], or find a closed-form linear map that projects source points to target [2]. Alternatively, the subspaces can be aligned by computing the linear map that minimizes the Frobenius norm of the difference between them, a method known as Subspace Alignment [1]. Approach. The intuition behind our approach is that although the existing approaches might align the subspaces (the bases of the subspaces), it might not fully align the data distributions in the subspaces as illustrated in Figure 1. We use the firstand second-order statistics, namely the mean and the variance, to describe a distribution in this paper. Since the mean after data preprocessing (i.e. normalization) is zero and is not affected 10 20 30 40 50 60 70 80 90 100 25 30 35 40 45 Subspace Dimension A c c u ra c y Mean Accuracy of k−NN with k=1 NA SA SDA−TS GFK SDA−IS 10 20 30 40 50 60 70 80 90 100 25 30 35 40 45 Subspace Dimension A c c u ra c y Mean Accuracy of k−NN with k=3 NA SA SDA−TS GFK SDA−IS Figure 2: Mean accuracy across all 12 experiment settings (domain shifts) of the k-NN Classifier on the Office-Caltech10 dataset. Both our methods SDA-IS and SDA-TS outperform GFK and SA consistently. Left: k-NN Classifier with k=1; Right: k-NN Classifier with k=3. 10 20 30 40 50 60 70 80 90 100 15 20 25 30 Subspace Dimension A c c u ra c y Mean Accuracy of k−NN with k=1",
"title": ""
},
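For reference, the Subspace Alignment baseline that the passage above builds on can be written in a few lines; the distribution-alignment extension described in the passage additionally matches first- and second-order statistics, which is not shown here. This is only a sketch, assuming zero-mean, standardized features; a 1-NN classifier trained on the aligned source and evaluated on the projected target would reproduce the kind of comparison referred to in the figure caption.

```python
import numpy as np
from sklearn.decomposition import PCA

def subspace_alignment(Xs, Xt, d=20):
    """Subspace Alignment baseline: learn top-d PCA bases for source and target,
    align the source basis to the target basis, and project both domains."""
    Ps = PCA(n_components=d).fit(Xs).components_.T   # (D, d) source basis
    Pt = PCA(n_components=d).fit(Xt).components_.T   # (D, d) target basis
    M = Ps.T @ Pt                                    # alignment matrix
    Xs_aligned = Xs @ Ps @ M                         # source data mapped toward target
    Xt_projected = Xt @ Pt
    return Xs_aligned, Xt_projected
```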
{
"docid": "972abdbc8667c24ae080eb2ffb7835e9",
"text": "Two important cues to female physical attractiveness are body mass index (BMI) and shape. In front view, it seems that BMI may be more important than shape; however, is it true in profile where shape cues may be stronger? There is also the question of whether men and women have the same perception of female physical attractiveness. Some studies have suggested that they do not, but this runs contrary to mate selection theory. This predicts that women will have the same perception of female attractiveness as men do. This allows them to judge their own relative value, with respect to their peer group, and match this value with the value of a prospective mate. To clarify these issues we asked 40 male and 40 female undergraduates to rate a set of pictures of real women (50 in front-view and 50 in profile) for attractiveness. BMI was the primary predictor of attractiveness in both front and profile, and the putative visual cues to BMI showed a higher degree of view-invariance than shape cues such as the waist-hip ratio (WHR). Consistent with mate selection theory, there were no significant differences in the rating of attractiveness by male and female raters.",
"title": ""
},
{
"docid": "f6654502056cd7529bf5981ac472559f",
"text": "This work studies deep metric learning under small to medium scale as we believe that better generalization could be a contributing factor to the improvement of previous fine-grained image retrieval methods; it should be considered when designing future techniques. In particular, we investigate using other layers in a deep metric learning system (besides the embedding layer) for feature extraction and analyze how well they perform on training data and generalize to testing data. From this study, we suggest a new regularization practice where one can add or choose a more optimal layer for feature extraction. State-of-the-art performance is demonstrated on 3 fine-grained image retrieval benchmarks: Cars-196, CUB-200-2011, and Stanford Online Product.",
"title": ""
},
{
"docid": "2fc05946c4e17c0ca199cc8896e38362",
"text": "Hierarchical multilabel classification allows a sample to belong to multiple class labels residing on a hierarchy, which can be a tree or directed acyclic graph (DAG). However, popular hierarchical loss functions, such as the H-loss, can only be defined on tree hierarchies (but not on DAGs), and may also under- or over-penalize misclassifications near the bottom of the hierarchy. Besides, it has been relatively unexplored on how to make use of the loss functions in hierarchical multilabel classification. To overcome these deficiencies, we first propose hierarchical extensions of the Hamming loss and ranking loss which take the mistake at every node of the label hierarchy into consideration. Then, we first train a general learning model, which is independent of the loss function. Next, using Bayesian decision theory, we develop Bayes-optimal predictions that minimize the corresponding risks with the trained model. Computationally, instead of requiring an exhaustive summation and search for the optimal multilabel, the resultant optimization problem can be efficiently solved by a greedy algorithm. Experimental results on a number of real-world data sets show that the proposed Bayes-optimal classifier outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "49108ff6bdebfef7295d4dc3681897e8",
"text": "Recognition of materials has proven to be a challenging problem due to the wide variation in appearance within and between categories. Global image context, such as where the material is or what object it makes up, can be crucial to recognizing the material. Existing methods, however, operate on an implicit fusion of materials and context by using large receptive fields as input (i.e., large image patches). Many recent material recognition methods treat materials as yet another set of labels like objects. Materials are, however, fundamentally different from objects as they have no inherent shape or defined spatial extent. Approaches that ignore this can only take advantage of limited implicit context as it appears during training. We instead show that recognizing materials purely from their local appearance and integrating separately recognized global contextual cues including objects and places leads to superior dense, per-pixel, material recognition. We achieve this by training a fully-convolutional material recognition network end-toend with only material category supervision. We integrate object and place estimates to this network from independent CNNs. This approach avoids the necessity of preparing an impractically-large amount of training data to cover the product space of materials, objects, and scenes, while fully leveraging contextual cues for dense material recognition. Furthermore, we perform a detailed analysis of the effects of context granularity, spatial resolution, and the network level at which we introduce context. On a recently introduced comprehensive and diverse material database [14], we confirm that our method achieves state-of-the-art accuracy with significantly less training data compared to past methods.",
"title": ""
},
{
"docid": "313761d2cdb224253f87fe4b33977b85",
"text": "In this paper we described an authorship attribution system for Bengali blog texts. We have presented a new Bengali blog corpus of 3000 passages written by three authors. Our study proposes a text classification system, based on lexical features such as character bigrams and trigrams, word n-grams (n = 1, 2, 3) and stop words, using four classifiers. We achieve best results (more than 99%) on the held-out dataset using Multi layered Perceptrons (MLP) amongst the four classifiers, which indicates MLP can produce very good results for big data sets and lexical n-gram based features can be the best features for any authorship attribution system.",
"title": ""
},
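A compact version of the character n-gram pipeline described above might look like the following; the variable names and the choice of TF-IDF weighting are assumptions, and the paper additionally uses word n-grams and stop-word features and compares four classifiers rather than only an MLP.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# `passages` and `authors` are assumed to be parallel lists of blog passages
# and author labels, mirroring the three-author corpus described above.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 3), max_features=5000),
    MLPClassifier(hidden_layer_sizes=(100,), max_iter=300, random_state=0),
)
# clf.fit(passages, authors)
# predicted_author = clf.predict(["<held-out blog passage>"])[0]
```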
{
"docid": "df70cb4b1d37680cccb7d79bdea5d13b",
"text": "In this paper, we describe a system for automatic construction of user disease progression timelines from their posts in online support groups using minimal supervision. In recent years, several online support groups have been established which has led to a huge increase in the amount of patient-authored text available. Creating systems which can automatically extract important medical events and create disease progression timelines for users from such text can help in patient health monitoring as well as studying links between medical events and users’ participation in support groups. Prior work in this domain has used manually constructed keyword sets to detect medical events. In this work, our aim is to perform medical event detection using minimal supervision in order to develop a more general timeline construction system. Our system achieves an accuracy of 55.17%, which is 92% of the performance achieved by a supervised baseline system.",
"title": ""
},
{
"docid": "f56e465f5f45388e5f439d03bf5ec391",
"text": "In this article, the authors evaluate L. Kohlberg's (1984) cognitive- developmental approach to morality, find it wanting, and introduce a more pragmatic approach. They review research designed to evaluate Kohlberg's model, describe how they revised the model to accommodate discrepant findings, and explain why they concluded that it is poorly equipped to account for the ways in which people make moral decisions in their everyday lives. The authors outline in 11 propositions a framework for a new approach that is more attentive to the purposes that people use morality to achieve. People make moral judgments and engage in moral behaviors to induce themselves and others to uphold systems of cooperative exchange that help them achieve their goals and advance their interests.",
"title": ""
},
{
"docid": "9c71ba4c4b692adaf0b147598adcc1d2",
"text": "Probabilistic programming languages are used for developing statistical models. They typically consist of two components: a specification of a stochastic process (the prior) and a specification of observations that restrict the probability space to a conditional subspace (the posterior). Use cases of such formalisms include the development of algorithms in machine learning and artificial intelligence.\n In this article, we establish a probabilistic-programming extension of Datalog that, on the one hand, allows for defining a rich family of statistical models, and on the other hand retains the fundamental properties of declarativity. Our proposed extension provides mechanisms to include common numerical probability functions; in particular, conclusions of rules may contain values drawn from such functions. The semantics of a program is a probability distribution over the possible outcomes of the input database with respect to the program. Observations are naturally incorporated by means of integrity constraints over the extensional and intensional relations. The resulting semantics is robust under different chases and invariant to rewritings that preserve logical equivalence.",
"title": ""
},
{
"docid": "41defd4d4926625cdb617e8482bf3177",
"text": "Common perception regards the nucleus as a densely packed object with higher refractive index (RI) and mass density than the surrounding cytoplasm. Here, the volume of isolated nuclei is systematically varied by electrostatic and osmotic conditions as well as drug treatments that modify chromatin conformation. The refractive index and dry mass of isolated nuclei is derived from quantitative phase measurements using digital holographic microscopy (DHM). Surprisingly, the cell nucleus is found to have a lower RI and mass density than the cytoplasm in four different cell lines and throughout the cell cycle. This result has important implications for conceptualizing light tissue interactions as well as biological processes in cells.",
"title": ""
},
{
"docid": "10436030a178ee21d1b1753ecf8437c7",
"text": "Recently, teaching evaluation is defined the main part of quality in education. The students normally make answers on questionnaire that are divided into types; close-end question and open-end question. The close-end question is simple answer as multi-choices that are easily processed by statistical evaluation. On the other hand, open-end question gives the person answering in phrases or statements that are recommended their teacher. The problem is mostly LMS ignored these open-end questions to overall analysis. Therefore, analysis and processing of these open-end questions are very importance and determined teaching. This research presents analysis model for teaching evaluation from answering and posting a comment to discussion in form of open-end question obtained from moodle LMS forum using data mining techniques. The techniques extract classification of attitudes that are defined positive and negative attitude from students to instructor for improvement of learning and teaching. These classification models are compared three algorithms; ID3, BFTree and Naïve Bayes. The experimental results, the decision tree is achieved correctly classifier 80% compared with others.",
"title": ""
},
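For the open-ended comments described above, the attitude-classification step could be prototyped as below; MultinomialNB and a decision tree stand in for the compared models, and the `comments`/`labels` arrays are assumed inputs extracted from the Moodle forum rather than anything specified in the passage.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# `comments` = open-ended forum answers, `labels` = "positive"/"negative" attitudes.
nb = make_pipeline(CountVectorizer(), MultinomialNB())
tree = make_pipeline(CountVectorizer(), DecisionTreeClassifier(random_state=0))
# for name, model in [("naive bayes", nb), ("decision tree", tree)]:
#     print(name, cross_val_score(model, comments, labels, cv=5).mean())
```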
{
"docid": "36a42101afca653bb0252be3bc275c28",
"text": "Virtual reality (VR) technology offers new opportunities for the development of innovative neuropsychological assessment and rehabilitation tools. VR-based testing and training scenarios that would be difficult, if not impossible, to deliver using conventional neuropsychological methods are now being developed that take advantage of the assets available with VR technology. If empirical studies continue to demonstrate effectiveness, virtual environment applications could provide new options for targeting cognitive and functional impairments due to traumatic brain injury, neurological disorders, and learning disabilities. This article focuses on specifying the assets that are available with VR for neuropsychological applications along with discussion of current VR-based research that serves to illustrate each asset. VR allows for the precise presentation and control of dynamic multi-sensory 3D stimulus environments, as well as providing advanced methods for recording behavioural responses. This serves as the basis for a diverse set of VR assets for neuropsychological approaches that are detailed in this article. We take the position that when combining these assets within the context of functionally relevant, ecologically valid virtual environments, fundamental advancements can emerge in how human cognition and functional behaviour is assessed and rehabilitated.",
"title": ""
},
{
"docid": "5c96222feacb0454d353dcaa1f70fb83",
"text": "Geographically dispersed teams are rarely 100% dispersed. However, by focusing on teams that are either fully dispersed or fully co-located, team research to date has lived on the ends of a spectrum at which relatively few teams may actually work. In this paper, we develop a more robust view of geographic dispersion in teams. Specifically, we focus on the spatialtemporal distances among team members and the configuration of team members across sites (independent of the spatial and temporal distances separating those sites). To better understand the nature of dispersion, we develop a series of five new measures and explore their relationships with communication frequency data from a sample of 182 teams (of varying degrees of dispersion) from a Fortune 500 telecommunications firm. We conclude with recommendations regarding the use of different measures and important questions that they could help address. Geographic Dispersion in Teams 1",
"title": ""
},
{
"docid": "abda350daca4705e661d8e59a6946e08",
"text": "Concept definition is important in language understanding (LU) adaptation since literal definition difference can easily lead to data sparsity even if different data sets are actually semantically correlated. To address this issue, in this paper, a novel concept transfer learning approach is proposed. Here, substructures within literal concept definition are investigated to reveal the relationship between concepts. A hierarchical semantic representation for concepts is proposed, where a semantic slot is represented as a composition of atomic concepts. Based on this new hierarchical representation, transfer learning approaches are developed for adaptive LU. The approaches are applied to two tasks: value set mismatch and domain adaptation, and evaluated on two LU benchmarks: ATIS and DSTC 2&3. Thorough empirical studies validate both the efficiency and effectiveness of the proposed method. In particular, we achieve state-ofthe-art performance (F1-score 96.08%) on ATIS by only using lexicon features.",
"title": ""
},
{
"docid": "4db8ae1dc1f340a4c7d9fcd90fb576b7",
"text": "Implementation of digital modulators on Field Programmable Gate Array (FPGA) is a research area that has received great attention recently. Most of the research has focused on the implementation of simple digital modulators on FPGAs such as Amplitude Shift Keying (ASK), Frequency Shift Keying (FSK), and Phase Shift Keying (PSK). This paper presented a novel method of implementing Quadrature Phase Shift Keying (QPSK) along with Binary PSK (BPSK) using accumulators with a reverse addressing technique. The implementation of the BPSK modulator required two sinusoidal signals with 180-degree phase shift. The first signal was obtained using Look Up Table(LUT) based on Direct Digital Synthesizer (DDS) technique. The second signal was obtained by using the same LUT but after inverting the most significant bit of the accumulator to get the out of phase signal. For the QPSK modulator, four sinusoidal waves were needed. Using only one LUT, these waves were obtained. The first two wave were generated by using two accumulators working on the rising edge and the falling edge of a perfect twice frequency square wave clock which results in a 90-degree phase shift. The other waves were obtained from the same accumulators after reversing the most significant bit in each one. The implementation of the entire systems was done in the Very high speed integrated circuit Hardware Description Language (VHDL) without the help of Xilinx System Generator or DSP Builder tools as many papers did.",
"title": ""
}
] |
scidocsrr
|
1405deeaab0e14b68fd7ed6bd16445fb
|
Efficient SVM Regression Training with SMO
|
[
{
"docid": "e494f926c9b2866d2c74032d200e4d0a",
"text": "This chapter describes a new algorithm for training Support Vector Machines: Sequential Minimal Optimization, or SMO. Training a Support Vector Machine (SVM) requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because large matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while a standard projected conjugate gradient (PCG) chunking algorithm scales somewhere between linear and cubic in the training set size. SMO's computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. For the MNIST database, SMO is as fast as PCG chunking; while for the UCI Adult database and linear SVMs, SMO can be more than 1000 times faster than the PCG chunking algorithm.",
"title": ""
}
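To make the working-set idea in the passage above concrete, here is a compact simplified-SMO trainer for a linear soft-margin classifier. It follows the common teaching simplification that picks the second multiplier at random, rather than Platt's full heuristics and error cache, so it is a sketch for small dense problems, not an efficient implementation; the function and parameter names are illustrative.

```python
import numpy as np

def simplified_smo(X, y, C=1.0, tol=1e-3, max_passes=5, seed=0):
    """Train a linear soft-margin SVM with a simplified SMO loop.
    X: (n, d) array, y: labels in {-1, +1}. Returns primal weights and bias."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = X @ X.T                          # linear kernel matrix
    alpha, b, passes = np.zeros(n), 0.0, 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            Ei = (alpha * y) @ K[:, i] + b - y[i]
            if (y[i] * Ei < -tol and alpha[i] < C) or (y[i] * Ei > tol and alpha[i] > 0):
                j = int(rng.integers(n - 1))
                j = j + 1 if j >= i else j               # pick j != i at random
                Ej = (alpha * y) @ K[:, j] + b - y[j]
                ai_old, aj_old = alpha[i], alpha[j]
                if y[i] != y[j]:
                    L, H = max(0.0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0.0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                if L == H:
                    continue
                eta = 2 * K[i, j] - K[i, i] - K[j, j]    # second derivative along the constraint
                if eta >= 0:
                    continue
                alpha[j] = np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H)
                if abs(alpha[j] - aj_old) < 1e-5:
                    continue
                alpha[i] = ai_old + y[i] * y[j] * (aj_old - alpha[j])
                b1 = b - Ei - y[i] * (alpha[i] - ai_old) * K[i, i] - y[j] * (alpha[j] - aj_old) * K[i, j]
                b2 = b - Ej - y[i] * (alpha[i] - ai_old) * K[i, j] - y[j] * (alpha[j] - aj_old) * K[j, j]
                if 0 < alpha[i] < C:
                    b = b1
                elif 0 < alpha[j] < C:
                    b = b2
                else:
                    b = (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    w = (alpha * y) @ X                  # recover primal weights for the linear kernel
    return w, b
```

Predictions are then sign(X_new @ w + b); on small separable toy data this recovers essentially the same decision boundary as an off-the-shelf SVM solver.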
] |
[
{
"docid": "46360fec3d7fa0adbe08bb4b5bb05847",
"text": "Previous approaches to action recognition with deep features tend to process video frames only within a small temporal region, and do not model long-range dynamic information explicitly. However, such information is important for the accurate recognition of actions, especially for the discrimination of complex activities that share sub-actions, and when dealing with untrimmed videos. Here, we propose a representation, VLAD for Deep Dynamics (VLAD3), that accounts for different levels of video dynamics. It captures short-term dynamics with deep convolutional neural network features, relying on linear dynamic systems (LDS) to model medium-range dynamics. To account for long-range inhomogeneous dynamics, a VLAD descriptor is derived for the LDS and pooled over the whole video, to arrive at the final VLAD3 representation. An extensive evaluation was performed on Olympic Sports, UCF101 and THUMOS15, where the use of the VLAD3 representation leads to state-of-the-art results.",
"title": ""
},
{
"docid": "7359e387937ce66ce8565237cbf4f1b0",
"text": "A new design of stripline transition structures and flip-chip interconnects for high-speed digital communication systems implemented in low-temperature cofired ceramic (LTCC) substrates is presented. Simplified fabrication, suitability for LTCC machining, suitability for integration with other components, and connection to integrated stripline or microstrip interconnects for LTCC multichip modules and system on package make this approach well suited for miniaturized, advanced broadband, and highly integrated multichip ceramic modules. The transition provides excellent signal integrity at high-speed digital data rates up to 28 Gbits/s. Full-wave simulations and experimental results demonstrate a cost-effective solution for a wide frequency range from dc to 30 GHz and beyond. Signal integrity and high-speed digital data rate performances are verified through eye diagram and time-domain reflectometry and time-domain transmissometry measurements over a 10-cm long stripline.",
"title": ""
},
{
"docid": "d53d3b562c95f3a8e0a0b6f2c60d99e2",
"text": "A capacitive level-shifter as a part of a high voltage halfbridge gate driver is presented in this work. The level-shifter utilizes a differential capacitor pair to transfer the information from low side to high side. A thorough evaluation of the critical parts of the level-shifter is presented with focus on low power consumption as well as low capacitive load between the floating half-bridge node and ground (output capacitance). The operation of the level-shifter is tested and verified by measurements on a prototype half-bridge gate driver. Results conclude stabile operation at 2.44kV, 50kHz with a current consumption of 0.5mA. Operation voltage was limited by test equipment. The output capacitance is [email protected].",
"title": ""
},
{
"docid": "614b45e8802497bdd61df63a9745c115",
"text": "Wireless sensor networks have potential to monitor environments for both military and civil applications. Due to inhospitable conditions these sensors are not always deployed uniformly in the area of interest. Since sensors are generally constrained in on-board energy supply, efficient management of the network is crucial to extend the life of the sensors. Sensors energy cannot support long haul communication to reach a remote command site and thus requires many levels of hops or a gateway to forward the data on behalf of the sensor. In this paper we propose an algorithm to network these sensors in to well define clusters with less-energy-constrained gateway nodes acting as clusterheads, and balance load among these gateways. Simulation results show how our approach can balance the load and improve the lifetime of the system.",
"title": ""
},
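A toy version of the gateway-clustering step described above might simply assign each sensor to the nearest gateway that still has spare capacity, as below; the paper's algorithm balances load more carefully and accounts for communication cost, so this is only an illustrative stand-in with assumed 2-D coordinates and a hard per-gateway cap.

```python
import math

def cluster_sensors(sensors, gateways, capacity):
    """Greedy load-balanced clustering: each sensor joins the nearest gateway
    (clusterhead) that still has room, capping every gateway at `capacity`."""
    load = {g: 0 for g in gateways}
    assignment = {}
    for s in sensors:
        for g in sorted(gateways, key=lambda g: math.dist(s, g)):
            if load[g] < capacity:
                assignment[s] = g
                load[g] += 1
                break
    return assignment

# Example: 6 sensors, 2 gateways, at most 3 sensors per gateway.
sensors = [(0, 0), (1, 0), (0, 1), (9, 9), (8, 9), (9, 8)]
gateways = [(0, 0), (10, 10)]
print(cluster_sensors(sensors, gateways, capacity=3))
```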
{
"docid": "a30c2a8d3db81ae121e62af5994d3128",
"text": "Recent advances in the fields of robotics, cyborg development, moral psychology, trust, multi agent-based systems and socionics have raised the need for a better understanding of ethics, moral reasoning, judgment and decision-making within the system of man and machines. Here we seek to understand key research questions concerning the interplay of ethical trust at the individual level and the social moral norms at the collective end. We review salient works in the fields of trust and machine ethics research, underscore the importance and the need for a deeper understanding of ethical trust at the individual level and the development of collective social moral norms. Drawing upon the recent findings from neural sciences on mirror-neuron system (MNS) and social cognition, we present a bio-inspired Computational Model of Ethical Trust (CMET) to allow investigations of the interplay of ethical trust and social moral norms.",
"title": ""
},
{
"docid": "873c2e7774791417d6cb4f5904cde74c",
"text": "This article discusses empirical findings and conceptual elaborations of the last 10 years in strategic niche management research (SNM). The SNM approach suggests that sustainable innovation journeys can be facilitated by creating technological niches, i.e. protected spaces that allow the experimentation with the co-evolution of technology, user practices, and regulatory structures. The assumption was that if such niches were constructed appropriately, they would act as building blocks for broader societal changes towards sustainable development. The article shows how concepts and ideas have evolved over time and new complexities were introduced. Research focused on the role of various niche-internal processes such as learning, networking, visioning and the relationship between local projects and global rule sets that guide actor behaviour. The empirical findings showed that the analysis of these niche-internal dimensions needed to be complemented with attention to niche external processes. In this respect, the multi-level perspective proved useful for contextualising SNM. This contextualisation led to modifications in claims about the dynamics of sustainable innovation journeys. Niches are to be perceived as crucial for bringing about regime shifts, but they cannot do this on their own. Linkages with ongoing external processes are also important. Although substantial insights have been gained, the SNM approach is still an unfinished research programme. We identify various promising research directions, as well as policy implications.",
"title": ""
},
{
"docid": "57cb465ba54502fd5685f37b37812d71",
"text": "Solving logistic regression with L1-regularization in distributed settings is an important problem. This problem arises when training dataset is very large and cannot fit the memory of a single machine. We present d-GLMNET, a new algorithm solving logistic regression with L1-regularization in the distributed settings. We empirically show that it is superior over distributed online learning via truncated gradient.",
"title": ""
},
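The objective solved in the passage above, L1-regularised logistic regression, can be written down compactly with a proximal-gradient (ISTA) update; note this is a single-machine sketch of the objective only, not the distributed coordinate-descent scheme that d-GLMNET actually uses, and the step size and regularisation strength are arbitrary assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (elementwise soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def l1_logistic_regression(X, y, lam=0.1, lr=0.1, n_iter=1000):
    """Minimise (1/n) * logistic loss + lam * ||w||_1 by proximal gradient descent.
    X: (n, d) features, y: labels in {0, 1}. Intercept omitted for brevity."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))              # predicted probabilities
        grad = X.T @ (p - y) / n                      # gradient of the smooth part
        w = soft_threshold(w - lr * grad, lr * lam)   # proximal step = soft threshold
    return w
```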
{
"docid": "ef74392a9681d16b14970740cbf85191",
"text": "We propose an efficient physics-based method for dexterous ‘real hand’ - ‘virtual object’ interaction in Virtual Reality environments. Our method is based on the Coulomb friction model, and we show how to efficiently implement it in a commodity VR engine for realtime performance. This model enables very convincing simulations of many types of actions such as pushing, pulling, grasping, or even dexterous manipulations such as spinning objects between fingers without restrictions on the objects' shapes or hand poses. Because it is an analytic model, we do not require any prerecorded data, in contrast to previous methods. For the evaluation of our method, we conduction a pilot study that shows that our method is perceived more realistic and natural, and allows for more diverse interactions. Further, we evaluate the computational complexity of our method to show real-time performance in VR environments.",
"title": ""
},
{
"docid": "1f9b121a75f4ab1169e083054365469e",
"text": "In this paper, we present Studierstube ES, a framework for the development of handheld Augmented Reality. The applications run self-contained on handheld computers and smartphones with Windows CE. A detailed description of the performance critical tracking and rendering components are given. We also report on the implementation of a client-server architecture for multi-user applications, and a game engine for location based museum games that has been built on top of this infrastructure. Details on two games that were created, permanently deployed and evaluated in two Austrian museums illustrate the practical value of the framework and lessons learned from using it.",
"title": ""
},
{
"docid": "7d3b8f381710cb196ba126f2b1942d57",
"text": "Radar devices can be used in nonintrusive situations to monitor vital sign, through clothes or behind walls. By detecting and extracting body motion linked to physiological activity, accurate simultaneous estimations of both heart rate (HR) and respiration rate (RR) is possible. However, most research to date has focused on front monitoring of superficial motion of the chest. In this paper, body penetration of electromagnetic (EM) wave is investigated to perform back monitoring of human subjects. Using body-coupled antennas and an ultra-wideband (UWB) pulsed radar, in-body monitoring of lungs and heart motion was achieved. An optimised location of measurement in the back of a subject is presented, to enhance signal-to-noise ratio and limit attenuation of reflected radar signals. Phase-based detection techniques are then investigated for back measurements of vital sign, in conjunction with frequency estimation methods that reduce the impact of parasite signals. Finally, an algorithm combining these techniques is presented to allow robust and real-time estimation of both HR and RR. Static and dynamic tests were conducted, and demonstrated the possibility of using this sensor in future health monitoring systems, especially in the form of a smart car seat for driver monitoring.",
"title": ""
},
{
"docid": "b07e438c8bd71765373341c3bf1f9088",
"text": "Procrastination is a common behavior, mainly in school settings. Only a few studies have analyzed the associations of academic procrastination with students' personal and family variables. In the present work, we analyzed the impact of socio-personal variables (e.g., parents' education, number of siblings, school grade level, and underachievement) on students' academic procrastination profiles. Two independent samples of 580 and 809 seventh to ninth graders, students attending the last three years of Portuguese Compulsory Education, have been taken. The findings, similar in both studies, reveal that procrastination decreases when the parents' education is higher, but it increases along with the number of siblings, the grade level, and the underachievement. The results are discussed in view of the findings of previous research. The implications for educational practice are also analyzed.",
"title": ""
},
{
"docid": "3767702e22ac34493bb1c6c2513da9f7",
"text": "The majority of the online reviews are written in free-text format. It is often useful to have a measure which summarizes the content of the review. One such measure can be sentiment which expresses the polarity (positive/negative) of the review. However, a more granular classification of sentiment, such as rating stars, would be more advantageous and would help the user form a better opinion. In this project, we propose an approach which involves a combination of topic modeling and sentiment analysis to achieve this objective and thereby help predict the rating stars.",
"title": ""
},
{
"docid": "463c1df3306820f92be1566c03a2b0f9",
"text": "Precision and planning are key to reconstructive surgery. Augmented reality (AR) can bring the information within preoperative computed tomography angiography (CTA) imaging to life, allowing the surgeon to 'see through' the patient's skin and appreciate the underlying anatomy without making a single incision. This work has demonstrated that AR can assist the accurate identification, dissection and execution of vascular pedunculated flaps during reconstructive surgery. Separate volumes of osseous, vascular, skin, soft tissue structures and relevant vascular perforators were delineated from preoperative CTA scans to generate three-dimensional images using two complementary segmentation software packages. These were converted to polygonal models and rendered by means of a custom application within the HoloLens™ stereo head-mounted display. Intraoperatively, the models were registered manually to their respective subjects by the operating surgeon using a combination of tracked hand gestures and voice commands; AR was used to aid navigation and accurate dissection. Identification of the subsurface location of vascular perforators through AR overlay was compared to the positions obtained by audible Doppler ultrasound. Through a preliminary HoloLens-assisted case series, the operating surgeon was able to demonstrate precise and efficient localisation of perforating vessels.",
"title": ""
},
{
"docid": "2fd708b638a6562b5b5c1cf2f9b156a5",
"text": "A main aspect of the Android platform is Inter-Application Communication (IAC), which enables reuse of functionality across apps and app components via message passing. While a powerful feature, IAC also constitutes a serious attack surface. A malicious app can embed a payload into an IAC message, thereby driving the recipient app into a potentially vulnerable behavior if the message is processed without its fields first being sanitized or validated. We present what to our knowledge is the first comprehensive testing algorithm for Android IAC vulnerabilities. Toward this end, we first describe a catalog, stemming from our field experience, of 8 concrete vulnerability types that can potentially arise due to unsafe handling of incoming IAC messages. We then explain the main challenges that automated discovery of Android IAC vulnerabilities entails, including in particular path coverage and custom data fields, and present simple yet surprisingly effective solutions to these challenges. We have realized our testing approach as the IntentDroid system, which is available as a commercial cloud service. IntentDroid utilizes lightweight platform-level instrumentation, implemented via debug breakpoints (to run atop any Android device without any setup or customization), to recover IAC-relevant app-level behaviors. Evaluation of IntentDroid over a set of 80 top-popular apps has revealed a total 150 IAC vulnerabilities — some already fixed by the developers following our report — with a recall rate of 92% w.r.t. a ground truth established via manual auditing by a security expert.",
"title": ""
},
{
"docid": "6b911507bdc5b051a61bde272c2dc4d5",
"text": "Applications such as human–computer interaction, surveillance, biometrics and intelligent marketing would benefit greatly from knowledge of the attributes of the human subjects under scrutiny. The gender of a person is one such significant demographic attribute. This paper provides a review of facial gender recognition in computer vision. It is certainly not a trivial task to identify gender from images of the face. We highlight the challenges involved, which can be divided into human factors and those introduced during the image capture process. A comprehensive survey of facial feature extraction methods for gender recognition studied in the past couple of decades is provided. We appraise the datasets used for evaluation of gender classification performance. Based on the results reported, good performance has been achieved for images captured under controlled environments, but certainly there is still much work that can be done to improve the robustness of gender recognition under real-life environments.",
"title": ""
},
{
"docid": "134cde769a3faeeac80746b85313bd0b",
"text": "Adrenocortical carcinoma (ACC) in pediatric and adolescent patients is rare, and it is associated with various clinical symptoms. We introduce the case of an 8-year-old boy with ACC who presented with peripheral precocious puberty at his first visit. He displayed penis enlargement with pubic hair and facial acne. His serum adrenal androgen levels were elevated, and abdominal computed tomography revealed a right suprarenal mass. After complete surgical resection, the histological diagnosis was ACC. Two months after surgical removal of the mass, he subsequently developed central precocious puberty. He was treated with a gonadotropin-releasing hormone agonist to delay further pubertal progression. In patients with functioning ACC and surgical removal, clinical follow-up and hormonal marker examination for the secondary effects of excessive hormone secretion may be a useful option at least every 2 or 3 months after surgery.",
"title": ""
},
{
"docid": "60de343325a305b08dfa46336f2617b5",
"text": "On Friday, May 12, 2017 a large cyber-attack was launched using WannaCry (or WannaCrypt). In a few days, this ransomware virus targeting Microsoft Windows systems infected more than 230,000 computers in 150 countries. Once activated, the virus demanded ransom payments in order to unlock the infected system. The widespread attack affected endless sectors – energy, transportation, shipping, telecommunications, and of course health care. Britain’s National Health Service (NHS) reported that computers, MRI scanners, blood-storage refrigerators and operating room equipment may have all been impacted. Patient care was reportedly hindered and at the height of the attack, NHS was unable to care for non-critical emergencies and resorted to diversion of care from impacted facilities. While daunting to recover from, the entire situation was entirely preventable. A Bcritical^ patch had been released by Microsoft on March 14, 2017. Once applied, this patch removed any vulnerability to the virus. However, hundreds of organizations running thousands of systems had failed to apply the patch in the first 59 days it had been released. This entire situation highlights a critical need to reexamine how we maintain our health information systems. Equally important is a need to rethink how organizations sunset older, unsupported operating systems, to ensure that security risks are minimized. For example, in 2016, the NHS was reported to have thousands of computers still running Windows XP – a version no longer supported or maintained by Microsoft. There is no question that this will happen again. However, health organizations can mitigate future risk by ensuring best security practices are adhered to.",
"title": ""
},
{
"docid": "63f0ff6663f334e1ab05d0ce5d2239cf",
"text": "Railroad tracks need to be periodically inspected and monitored to ensure safe transportation. Automated track inspection using computer vision and pattern recognition methods has recently shown the potential to improve safety by allowing for more frequent inspections while reducing human errors. Achieving full automation is still very challenging due to the number of different possible failure modes, as well as the broad range of image variations that can potentially trigger false alarms. In addition, the number of defective components is very small, so not many training examples are available for the machine to learn a robust anomaly detector. In this paper, we show that detection performance can be improved by combining multiple detectors within a multitask learning framework. We show that this approach results in improved accuracy for detecting defects on railway ties and fasteners.",
"title": ""
},
{
"docid": "f83d8a69a4078baf4048b207324e505f",
"text": "Low-dose computed tomography (LDCT) has attracted major attention in the medical imaging field, since CT-associated X-ray radiation carries health risks for patients. The reduction of the CT radiation dose, however, compromises the signal-to-noise ratio, which affects image quality and diagnostic performance. Recently, deep-learning-based algorithms have achieved promising results in LDCT denoising, especially convolutional neural network (CNN) and generative adversarial network (GAN) architectures. This paper introduces a conveying path-based convolutional encoder-decoder (CPCE) network in 2-D and 3-D configurations within the GAN framework for LDCT denoising. A novel feature of this approach is that an initial 3-D CPCE denoising model can be directly obtained by extending a trained 2-D CNN, which is then fine-tuned to incorporate 3-D spatial information from adjacent slices. Based on the transfer learning from 2-D to 3-D, the 3-D network converges faster and achieves a better denoising performance when compared with a training from scratch. By comparing the CPCE network with recently published work based on the simulated Mayo data set and the real MGH data set, we demonstrate that the 3-D CPCE denoising model has a better performance in that it suppresses image noise and preserves subtle structures.",
"title": ""
},
{
"docid": "08a72844e8a974505b28527ee2fa3ee0",
"text": "Perfidy is the impersonation of civilians during armed conflict. It is generally outlawed by the laws of war such as the Geneva Conventions as its practice makes wars more dangerous for civilians. Cyber perfidy can be defined as malicious software or hardware masquerading as ordinary civilian software or hardware. We argue that it is also banned by the laws of war in cases where such cyber infrastructure is essential to normal civilian activity. This includes tampering with critical parts of operating systems and security software. We discuss possible targets of cyber perfidy, possible objections to the notion, and possible steps towards international agreements about it. This paper appeared in the Routledge Handbook of War and Ethics as chapter 29, ed. N. Evans, 2013.",
"title": ""
}
] |
scidocsrr
|
f126b61049bbb51f626739997889d900
|
Investigating users' query formulations for cognitive search intents
|
[
{
"docid": "b585947e882fca6f07b65dc940cc819f",
"text": "One way to help all users of commercial Web search engines be more successful in their searches is to better understand what those users with greater search expertise are doing, and use this knowledge to benefit everyone. In this paper we study the interaction logs of advanced search engine users (and those not so advanced) to better understand how these user groups search. The results show that there are marked differences in the queries, result clicks, post-query browsing, and search success of users we classify as advanced (based on their use of query operators), relative to those classified as non-advanced. Our findings have implications for how advanced users should be supported during their searches, and how their interactions could be used to help searchers of all experience levels find more relevant information and learn improved searching strategies.",
"title": ""
}
] |
[
{
"docid": "55dd9bf3372b1ae383d43664d60e9da8",
"text": "In this report, we consider the task of automated assessment of English as a Second Language (ESOL) examination scripts written in response to prompts eliciting free text answers. We review and critically evaluate previous work on automated assessment for essays, especially when applied to ESOL text. We formally define the task as discriminative preference ranking and develop a new system trained and tested on a corpus of manually-graded scripts. We show experimentally that our best performing system is very close to the upper bound for the task, as defined by the agreement between human examiners on the same corpus. Finally we argue that our approach, unlike extant solutions, is relatively prompt-insensitive and resistant to subversion, even when its operating principles are in the public domain. These properties make our approach significantly more viable for high-stakes assessment.",
"title": ""
},
{
"docid": "7aeb10faf8590ed9f4054bafcd4dee0c",
"text": "Concept, design, and measurement results of a frequency-modulated continuous-wave radar sensor in low-temperature co-fired ceramics (LTCC) technology is presented in this paper. The sensor operates in the frequency band between 77–81 GHz. As a key component of the system, wideband microstrip grid array antennas with a broadside beam are presented and discussed. The combination with a highly integrated feeding network and a four-channel transceiver chip based on SiGe technology results in a very compact LTCC RF frontend (23 mm <formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\times$</tex></formula> 23 mm). To verify the feasibility of the concept, first radar measurement results are presented.",
"title": ""
},
{
"docid": "dcfb5ebabf07e87843668338d8d9927a",
"text": "Click Fraud Bots pose a significant threat to the online economy. To-date efforts to filter bots have been geared towards identifiable useragent strings, as epitomized by the IAB's Robots and Spiders list. However bots designed to perpetrate malicious activity or fraud, are designed to avoid detection with these kinds of lists, and many use very sophisticated schemes for cloaking their activities. In order to combat this emerging threat, we propose the creation of Bot Signatures for training and evaluation of candidate Click Fraud Detection Systems. Bot signatures comprise keyed records connected to case examples. We demonstrate the technique by developing 8 simulated examples of Bots described in the literature including Click Bot A.",
"title": ""
},
{
"docid": "1159d83815e18d7822b8eb39c50e438d",
"text": "Imbalanced time series classification (TSC) involving many real-world applications has increasingly captured attention of researchers. Previous work has proposed an intelligent-structure preserving over-sampling method (SPO), which the authors claimed achieved better performance than other existing over-sampling and state-of-the-art methods in TSC. The main disadvantage of over-sampling methods is that they significantly increase the computational cost of training a classification model due to the addition of new minority class instances to balance data-sets with high dimensional features. These challenging issues have motivated us to find a simple and efficient solution for imbalanced TSC. Statistical tests are applied to validate our conclusions. The experimental results demonstrate that this proposed simple random under-sampling technique with SVM is efficient and can achieve results that compare favorably with the existing complicated SPO method for imbalanced TSC.",
"title": ""
},
{
"docid": "71b5a4d02be14868302f1b60d0a26484",
"text": "In cloud computing, data owners host their data on cloud servers and users (data consumers) can access the data from cloud servers. Due to the data outsourcing, however, this new paradigm of data hosting service also introduces new security challenges, which requires an independent auditing service to check the data integrity in the cloud. Some existing remote integrity checking methods can only serve for static archive data and, thus, cannot be applied to the auditing service since the data in the cloud can be dynamically updated. Thus, an efficient and secure dynamic auditing protocol is desired to convince data owners that the data are correctly stored in the cloud. In this paper, we first design an auditing framework for cloud storage systems and propose an efficient and privacy-preserving auditing protocol. Then, we extend our auditing protocol to support the data dynamic operations, which is efficient and provably secure in the random oracle model. We further extend our auditing protocol to support batch auditing for both multiple owners and multiple clouds, without using any trusted organizer. The analysis and simulation results show that our proposed auditing protocols are secure and efficient, especially it reduce the computation cost of the auditor.",
"title": ""
},
{
"docid": "d4a9ebafbc8f35380ab2b3bbbefd5583",
"text": "We present a GPU implementation of LAMMPS, a widely-used parallel molecular dynamics (MD) software package, and show 5x to 13x single node speedups versus the CPU-only version of LAMMPS. This new CUDA package for LAMMPS also enables multi-GPU simulation on hybrid heterogeneous clusters, using MPI for inter-node communication, CUDA kernels on the GPU for all methods working with particle data, and standard LAMMPS C++ code for CPU execution. Cell and neighbor list approaches are compared for best performance on GPUs, with thread-peratom and block-per-atom neighbor list variants showing best performance at low and high neighbor counts, respectively. Computational performance results of GPU-enabled LAMMPS are presented for a variety of materials classes (e.g. biomolecules, polymers, metals, semiconductors), along with a speed comparison versus other available GPU-enabled MD software. Finally, we show strong and weak scaling performance on a CPU/GPU cluster using up to 128 dual GPU nodes.",
"title": ""
},
{
"docid": "fd0a441610f5aef8aa29edd469dcf88a",
"text": "We treat with tools from convex analysis the general problem of cutting planes, separating a point from a (closed convex) set P . Crucial for this is the computation of extreme points in the so-called reverse polar set, introduced by E. Balas in 1979. In the polyhedral case, this enables the computation of cuts that define facets of P . We exhibit three (equivalent) optimization problems to compute such extreme points; one of them corresponds to selecting a specific normalization to generate cuts. We apply the above development to the case where P is (the closed convex hull of) a union, and more particularly a union of polyhedra (case of disjunctive cuts). We conclude with some considerations on the design of efficient cut generators. The paper also contains an appendix, reviewing some fundamental concepts of convex analysis.",
"title": ""
},
{
"docid": "b5c8d34b75dbbfdeb666fd76ef524be7",
"text": "Systematic Literature Reviews (SLR) may not provide insight into the \"state of the practice\" in SE, as they do not typically include the \"grey\" (non-published) literature. A Multivocal Literature Review (MLR) is a form of a SLR which includes grey literature in addition to the published (formal) literature. Only a few MLRs have been published in SE so far. We aim at raising the awareness for MLRs in SE by addressing two research questions (RQs): (1) What types of knowledge are missed when a SLR does not include the multivocal literature in a SE field? and (2) What do we, as a community, gain when we include the multivocal literature and conduct MLRs? To answer these RQs, we sample a few example SLRs and MLRs and identify the missing and the gained knowledge due to excluding or including the grey literature. We find that (1) grey literature can give substantial benefits in certain areas of SE, and that (2) the inclusion of grey literature brings forward certain challenges as evidence in them is often experience and opinion based. Given these conflicting viewpoints, the authors are planning to prepare systematic guidelines for performing MLRs in SE.",
"title": ""
},
{
"docid": "c443ca07add67d6fc0c4901e407c68f2",
"text": "This paper proposes a compiler-based programming framework that automatically translates user-written structured grid code into scalable parallel implementation code for GPU-equipped clusters. To enable such automatic translations, we design a small set of declarative constructs that allow the user to express stencil computations in a portable and implicitly parallel manner. Our framework translates the user-written code into actual implementation code in CUDA for GPU acceleration and MPI for node-level parallelization with automatic optimizations such as computation and communication overlapping. We demonstrate the feasibility of such automatic translations by implementing several structured grid applications in our framework. Experimental results on the TSUBAME2.0 GPU-based supercomputer show that the performance is comparable as hand-written code and good strong and weak scalability up to 256 GPUs.",
"title": ""
},
{
"docid": "d7f5449cf398b56a29c64adada7cf7d2",
"text": "Review The Prefrontal Cortex—An Update: Time Is of the Essence many of the principles discussed below apply also to the PFC of nonprimate species. Anatomy and Connections The PFC is the association cortex of the frontal lobe. In Los Angeles, California 90095 primates, it comprises areas 8–13, 24, 32, 46, and 47 according to the cytoarchitectonic map of Brodmann The physiology of the cerebral cortex is organized in (1909), recently updated for the monkey by Petrides and hierarchical manner. At the bottom of the cortical organi-Pandya (Figure 1). Phylogenetically, it is one of the latest zation, sensory and motor areas support specific sen-cortices to develop, having attained maximum relative sory and motor functions. Progressively higher areas—of growth in the human brain (Brodmann, 1912; Jerison, later phylogenetic and ontogenetic development—support 1994), where it constitutes nearly one-third of the neocor-functions that are progressively more integrative. The tex. Furthermore, the PFC undergoes late development in prefrontal cortex (PFC) constitutes the highest level of the course of ontogeny. In the human, by myelogenic and the cortical hierarchy dedicated to the representation synaptogenic criteria, the PFC is clearly late-maturing and execution of actions. The PFC can be subdivided in three major regions: Huttenlocher and Dabholkar, 1997). In the monkey's orbital, medial, and lateral. The orbital and medial re-PFC, myelogenesis also seems to develop late (Gibson, gions are involved in emotional behavior. The lateral 1991). However, the assumption that the synaptic struc-region, which is maximally developed in the human, pro-ture of the PFC lags behind that of other neocortical vides the cognitive support to the temporal organization areas has been challenged with morphometric data of behavior, speech, and reasoning. This function of (Bourgeois et al., 1994). In any case, imaging studies temporal organization is served by several subordinate indicate that, in the human, prefrontal areas do not attain functions that are closely intertwined (e.g., temporal in-full maturity until adolescence (Chugani et al., 1987; tegration, working memory, set). Whatever areal special-Paus et al., 1999; Sowell et al., 1999). This conclusion ization can be discerned in the PFC is not so much is consistent with the behavioral evidence that these attributable to the topographical distribution of those areas are critical for those higher cognitive functions functions as to the nature of the cognitive information that develop late, such as propositional speech and with which they operate. Much of the prevalent confu-reasoning. sion in the PFC literature derives from …",
"title": ""
},
{
"docid": "0389a49d23b72bf29c0a186de9566939",
"text": "IEEE 1451 has been around for almost 20 years and in that time it has seen many changes in the world of smart sensors. One of the most distinct paradigms to arise was the Internet-of-Things and with it, the popularity of light-weight and simple to implement communication protocols. One of these protocols in particular, MQ Telemetry Transport has become synonymous with large cloud service providers such as Amazon Web Services, IBM Watson, and Microsoft Azure, along with countless other services. While MQTT had be traditionally used in controlled networks within server centers, the simplicity of the protocol has caused it to be utilized on the open internet. Now being called the language of the IoT, it seems obvious that any standard that is aiming to bring a common network service layer to the IoT architecture should be able to utilize MQTT. This paper proposes potential methodologies to extend the Common Architectures and Network services found in the IEEE 1451 Family of Standard into applications which utilize MQTT.",
"title": ""
},
{
"docid": "70c82bb98d0e558280973d67429cea8a",
"text": "We present an algorithm for separating the local gradient information and Lambertian color by using 4-source color photometric stereo in the presence of highlights and shadows. We assume that the surface reflectance can be approximated by the sum of a Lambertian and a specular component. The conventional photometric method is generalized for color images. Shadows and highlights in the input images are detected using either spectral or directional cues and excluded from the recovery process, thus giving more reliable estimates of local surface parameters.",
"title": ""
},
{
"docid": "e9229d3ab3e9ec7e5020e50ca23ada0b",
"text": "Human beings have been recently reviewed as ‘metaorganisms’ as a result of a close symbiotic relationship with the intestinal microbiota. This assumption imposes a more holistic view of the ageing process where dynamics of the interaction between environment, intestinal microbiota and host must be taken into consideration. Age-related physiological changes in the gastrointestinal tract, as well as modification in lifestyle, nutritional behaviour, and functionality of the host immune system, inevitably affect the gut microbial ecosystem. Here we review the current knowledge of the changes occurring in the gut microbiota of old people, especially in the light of the most recent applications of the modern molecular characterisation techniques. The hypothetical involvement of the age-related gut microbiota unbalances in the inflamm-aging, and immunosenescence processes will also be discussed. Increasing evidence of the importance of the gut microbiota homeostasis for the host health has led to the consideration of medical/nutritional applications of this knowledge through the development of probiotic and prebiotic preparations specific for the aged population. The results of the few intervention trials reporting the use of pro/prebiotics in clinical conditions typical of the elderly will be critically reviewed.",
"title": ""
},
{
"docid": "fce6ac500501d0096aac3513639c2627",
"text": "Recent technological advances made necessary the use of the robots in various types of applications. Currently, the traditional robot-like scenarios dedicated to industrial applications with repetitive tasks, were replaced by applications which require human interaction. The main field of such applications concerns the rehabilitation and aid of elderly persons. In this study, we present a state-of-the-art of the main research advances in lower limbs actuated orthosis/wearable robots in the literature. This will include a review on researches covering full limb exoskeletons, lower limb exoskeletons and particularly the knee joint orthosis. Rehabilitation using treadmill based device and use of Functional Electrical Stimulation (FES) are also investigated. We discuss finally the challenges not yet solved such as issues related to portability, energy consumption, social constraints and high costs of theses devices.",
"title": ""
},
{
"docid": "6c730f32b02ca58f66e98f9fc5181484",
"text": "When analyzing a visualized network, users need to explore different sections of the network to gain insight. However, effective exploration of large networks is often a challenge. While various tools are available for users to explore the global and local features of a network, these tools usually require significant interaction activities, such as repetitive navigation actions to follow network nodes and edges. In this paper, we propose a structure-based suggestive exploration approach to support effective exploration of large networks by suggesting appropriate structures upon user request. Encoding nodes with vectorized representations by transforming information of surrounding structures of nodes into a high dimensional space, our approach can identify similar structures within a large network, enable user interaction with multiple similar structures simultaneously, and guide the exploration of unexplored structures. We develop a web-based visual exploration system to incorporate this suggestive exploration approach and compare performances of our approach under different vectorizing methods and networks. We also present the usability and effectiveness of our approach through a controlled user study with two datasets.",
"title": ""
},
{
"docid": "3a95b876619ce4b666278810b80cae77",
"text": "On 14 November 2016, northeastern South Island of New Zealand was struck by a major moment magnitude (Mw) 7.8 earthquake. Field observations, in conjunction with interferometric synthetic aperture radar, Global Positioning System, and seismology data, reveal this to be one of the most complex earthquakes ever recorded. The rupture propagated northward for more than 170 kilometers along both mapped and unmapped faults before continuing offshore at the island’s northeastern extent. Geodetic and field observations reveal surface ruptures along at least 12 major faults, including possible slip along the southern Hikurangi subduction interface; extensive uplift along much of the coastline; and widespread anelastic deformation, including the ~8-meter uplift of a fault-bounded block. This complex earthquake defies many conventional assumptions about the degree to which earthquake ruptures are controlled by fault segmentation and should motivate reevaluation of these issues in seismic hazard models.",
"title": ""
},
{
"docid": "66a4aa1e96596221729611add5390daf",
"text": "Table characteristics vary widely. Consequently, a great variety of computational approaches have been applied to table recognition. In this survey, the table recognition literature is presented as an interaction of table models, observations, transformations, and inferences. A table model defines the physical and logical structure of tables; the model is used to detect tables and to analyze and decompose the detected tables. Observations perform feature measurements and data lookup, transformations alter or restructure data, and inferences generate and test hypotheses. This presentation clarifies both the decisions made by a table recognizer and the assumptions and inferencing techniques that underlie these decisions.",
"title": ""
},
{
"docid": "295809398866d81cab85c44b145df56d",
"text": "This paper discusses the “Building-In Reliability” (BIR) approach to process development, particularly for technologies integrating Bipolar, CMOS, and DMOS devices (so-called BCD technologies). Examples of BIR reliability assessments include gate oxide integrity (GOI) through Time-Dependent Dielectric Breakdown (TDDB) studies and degradation of laterally diffused MOS (LDMOS) devices by Hot-Carrier Injection (HCI) stress. TDDB allows calculation of gate oxide failure rates based on operating voltage waveforms and temperature. HCI causes increases in LDMOS resistance (Rdson), which decreases efficiency in power applications.",
"title": ""
},
{
"docid": "975bc281e14246e29da61495e1e5dae1",
"text": "We have introduced the biomechanical research on snakes and developmental research on snake-like robots that we have been working on. We could not introduce everything we developed. There were also a smaller snake-like active endoscope; a large-sized snake-like inspection robot for nuclear reactor related facility, Koryu, 1 m in height, 3.5 m in length, and 350 kg in weight; and several other snake-like robots. Development of snake-like robots is still one of our latest research topics. We feel that the technical difficulties in putting snake-like robots into practice have almost been overcome by past research, so we believe that such practical use of snake-like robots can be realized soon.",
"title": ""
},
{
"docid": "f0ab3049cb9f66176c34a57d27592537",
"text": "We take a new, scenario-based look at evaluation in information visualization. Our seven scenarios, evaluating visual data analysis and reasoning, evaluating user performance, evaluating user experience, evaluating environments and work practices, evaluating communication through visualization, evaluating visualization algorithms, and evaluating collaborative data analysis were derived through an extensive literature review of over 800 visualization publications. These scenarios distinguish different study goals and types of research questions and are illustrated through example studies. Through this broad survey and the distillation of these scenarios, we make two contributions. One, we encapsulate the current practices in the information visualization research community and, two, we provide a different approach to reaching decisions about what might be the most effective evaluation of a given information visualization. Scenarios can be used to choose appropriate research questions and goals and the provided examples can be consulted for guidance on how to design one's own study.",
"title": ""
}
] |
scidocsrr
|
6c4201253760c8d371447fd68afc0e03
|
Gamifying software development scrum projects
|
[
{
"docid": "2ccb76e0cda888491ebb37bb316c5490",
"text": "For any Software Process Improvement (SPI) initiative to succeed human factors, in particular, motivation and commitment of the people involved should be kept in mind. In fact, Organizational Change Management (OCM) has been identified as an essential knowledge area for any SPI initiative. However, enough attention is still not given to the human factors and therefore, the high degree of failures in the SPI initiatives is directly linked to a lack of commitment and motivation. Gamification discipline allows us to define mechanisms that drive people’s motivation and commitment towards the development of tasks in order to encourage and accelerate the acceptance of an SPI initiative. In this paper, a gamification framework oriented to both organization needs and software practitioners groups involved in an SPI initiative is defined. This framework tries to take advantage of the transverse nature of gamification in order to apply its Critical Success Factors (CSF) to the organizational change management of an SPI. Gamification framework guidelines have been validated by some qualitative methods. Results show some limitations that threaten the reliability of this validation. These require further empirical validation of a software organization.",
"title": ""
}
] |
[
{
"docid": "6cca53a0b41a981bb6a1707c55e924da",
"text": "During sustained high-intensity military training or simulated combat exercises, significant decreases in physical performance measures are often seen. The use of dietary supplements is becoming increasingly popular among military personnel, with more than half of the US soldiers deployed or garrisoned reported to using dietary supplements. β-Alanine is a popular supplement used primarily by strength and power athletes to enhance performance, as well as training aimed at improving muscle growth, strength and power. However, there is limited research examining the efficacy of β-alanine in soldiers conducting operationally relevant tasks. The gains brought about by β-alanine use by selected competitive athletes appears to be relevant also for certain physiological demands common to military personnel during part of their training program. Medical and health personnel within the military are expected to extrapolate and implement relevant knowledge and doctrine from research performed on other population groups. The evidence supporting the use of β-alanine in competitive and recreational athletic populations suggests that similar benefits would also be observed among tactical athletes. However, recent studies in military personnel have provided direct evidence supporting the use of β-alanine supplementation for enhancing combat-specific performance. This appears to be most relevant for high-intensity activities lasting 60–300 s. Further, limited evidence has recently been presented suggesting that β-alanine supplementation may enhance cognitive function and promote resiliency during highly stressful situations.",
"title": ""
},
{
"docid": "653ca5c9478b1b1487fc24eeea8c1677",
"text": "A fundamental question in information theory and in computer science is how to measure similarity or the amount of shared information between two sequences. We have proposed a metric, based on Kolmogorov complexity, to answer this question and have proven it to be universal. We apply this metric in measuring the amount of shared information between two computer programs, to enable plagiarism detection. We have designed and implemented a practical system SID (Software Integrity Diagnosis system) that approximates this metric by a heuristic compression algorithm. Experimental results demonstrate that SID has clear advantages over other plagiarism detection systems. SID system server is online at http://software.bioinformatics.uwaterloo.ca/SID/.",
"title": ""
},
{
"docid": "1b919e6f56e908902480c90d6f0d4ce0",
"text": "Vehicular Ad-hoc Network (VANET) is an emerging new technology to enable communications among vehicles and nearby roadside infrastructures to provide intelligent transportation applications. In order to provide stable connections between vehicles, a reliable routing protocol is needed. Currently, there are several routing protocols designed for MANETs could be applied to VANETs. However, due to the unique characteristics of VANETs, the results are not encouraging. In this paper, we propose a new routing protocol named AODV-VANET, which incorporates the vehicles' movement information into the route discovery process based on Ad hoc On-Demand Distance Vector (AODV). A Total Weight of the Route is introduced to choose the best route together with an expiration time estimation to minimize the link breakages. With these modifications, the proposed protocol is able to achieve better routing performances.",
"title": ""
},
{
"docid": "bc4b545faba28a81202e3660c32c7ec2",
"text": "This paper describes a novel two-stage fully-differential CMOS amplifier comprising two self-biased inverter stages, with optimum compensation and high efficiency. Although it relies on a class A topology, it is shown through simulations, that it achieves the highest efficiency of its class and comparable to the best class AB amplifiers. Due to the self-biasing, a low variability in the DC gain over process, temperature, and supply is achieved. A detailed circuit analysis, a design methodology for optimization and the most relevant simulation results are presented, together with a final comparison among state-of-the-art amplifiers.",
"title": ""
},
{
"docid": "e24f60bc524a69976f727cb847ed92fa",
"text": "In large scale and complex IT service environments, a problematic incident is logged as a ticket and contains the ticket summary (system status and problem description). The system administrators log the step-wise resolution description when such tickets are resolved. The repeating service events are most likely resolved by inferring similar historical tickets. With the availability of reasonably large ticket datasets, we can have an automated system to recommend the best matching resolution for a given ticket summary. In this paper, we first identify the challenges in real-world ticket analysis and develop an integrated framework to efficiently handle those challenges. The framework first quantifies the quality of ticket resolutions using a regression model built on carefully designed features. The tickets, along with their quality scores obtained from the resolution quality quantification, are then used to train a deep neural network ranking model that outputs the matching scores of ticket summary and resolution pairs. This ranking model allows us to leverage the resolution quality in historical tickets when recommending resolutions for an incoming incident ticket. In addition, the feature vectors derived from the deep neural ranking model can be effectively used in other ticket analysis tasks, such as ticket classification and clustering. The proposed framework is extensively evaluated with a large real-world dataset.",
"title": ""
},
{
"docid": "60dd1689962a702e72660b33de1f2a17",
"text": "A grammar formalism called GHRG based on CHR is proposed analogously to the way Definite Clause Grammars are defined and implemented on top of Prolog. A CHRG executes as a robust bottom-up parser with an inherent treatment of ambiguity. The rules of a CHRG may refer to grammar symbols on either side of a sequence to be matched and this provides a powerful way to let parsing and attribute evaluation depend on linguistic context; examples show disambiguation of simple and ambiguous context-free rules and a handling of coordination in natural language. CHRGs may have rules to produce and consume arbitrary hypothesis and as an important application is shown an implementation of Assumption Grammars.",
"title": ""
},
{
"docid": "070ecf3890362cb4c24682aff5fa01c6",
"text": "This review builds on self-control theory (Carver & Scheier, 1998) to develop a theoretical framework for investigating associations of implicit theories with self-regulation. This framework conceptualizes self-regulation in terms of 3 crucial processes: goal setting, goal operating, and goal monitoring. In this meta-analysis, we included articles that reported a quantifiable assessment of implicit theories and at least 1 self-regulatory process or outcome. With a random effects approach used, meta-analytic results (total unique N = 28,217; k = 113) across diverse achievement domains (68% academic) and populations (age range = 5-42; 10 different nationalities; 58% from United States; 44% female) demonstrated that implicit theories predict distinct self-regulatory processes, which, in turn, predict goal achievement. Incremental theories, which, in contrast to entity theories, are characterized by the belief that human attributes are malleable rather than fixed, significantly predicted goal setting (performance goals, r = -.151; learning goals, r = .187), goal operating (helpless-oriented strategies, r = -.238; mastery-oriented strategies, r = .227), and goal monitoring (negative emotions, r = -.233; expectations, r = .157). The effects for goal setting and goal operating were stronger in the presence (vs. absence) of ego threats such as failure feedback. Discussion emphasizes how the present theoretical analysis merges an implicit theory perspective with self-control theory to advance scholarship and unlock major new directions for basic and applied research.",
"title": ""
},
{
"docid": "c4dbf075f91d1a23dda421261911a536",
"text": "In cultures of the Litopenaeus vannamei with biofloc, the concentrations of nitrate rise during the culture period, which may cause a reduction in growth and mortality of the shrimps. Therefore, the aim of this study was to determine the effect of the concentration of nitrate on the growth and survival of shrimp in systems using bioflocs. The experiment consisted of four treatments with three replicates each: The concentrations of nitrate that were tested were 75 (control), 150, 300, and 600 mg NO3 −-N/L. To achieve levels above 75 mg NO3 −-N/L, different dosages of sodium nitrate (PA) were added. For this purpose, twelve experimental units with a useful volume of 45 L were stocked with 15 juvenile L. vannamei (1.30 ± 0.31 g), corresponding to a stocking density of 333 shrimps/m3, that were reared for an experimental period of 42 days. Regarding the water quality parameters measured throughout the study, no significant differences were detected (p > 0.05). Concerning zootechnical performance, a significant difference (p < 0.05) was verified with the 75 (control) and 150 treatments presenting the best performance indexes, while the 300 and 600 treatments led to significantly poorer results (p < 0.05). The histopathological damage was observed in the gills and hepatopancreas of the shrimps exposed to concentrations ≥300 mg NO3 −-N/L for 42 days, and poorer zootechnical performance and lower survival were observed in the shrimps reared at concentrations ≥300 mg NO3 −-N/L under a salinity of 23. The results obtained in this study show that concentrations of nitrate up to 177 mg/L are acceptable for the rearing of L. vannamei in systems with bioflocs, without renewal of water, at a salinity of 23.",
"title": ""
},
{
"docid": "1014860e267cf8b36c118bb32995b34f",
"text": "Recently, several indoor localization solutions based on WiFi, Bluetooth, and UWB have been proposed. Due to the limitation and complexity of the indoor environment, the solution to achieve a low-cost and accurate positioning system remains open. This article presents a WiFibased positioning technique that can improve the localization performance from the bottleneck in ToA/AoA. Unlike the traditional approaches, our proposed mechanism relaxes the need for wide signal bandwidth and large numbers of antennas by utilizing the transmission of multiple predefined messages while maintaining high-accuracy performance. The overall system structure is demonstrated by showing localization performance with respect to different numbers of messages used in 20/40 MHz bandwidth WiFi APs. Simulation results show that our WiFi-based positioning approach can achieve 1 m accuracy without any hardware change in commercial WiFi products, which is much better than the conventional solutions from both academia and industry concerning the trade-off of cost and system complexity.",
"title": ""
},
{
"docid": "c10829be320a9be6ecbc9ca751e8b56e",
"text": "This article analyzes two decades of research regarding the mass media's role in shaping, perpetuating, and reducing the stigma of mental illness. It concentrates on three broad areas common in media inquiry: production, representation, and audiences. The analysis reveals that descriptions of mental illness and the mentally ill are distorted due to inaccuracies, exaggerations, or misinformation. The ill are presented not only as peculiar and different, but also as dangerous. Thus, the media perpetuate misconceptions and stigma. Especially prominent is the absence of agreed-upon definitions of \"mental illness,\" as well as the lack of research on the inter-relationships in audience studies between portrayals in the media and social perceptions. The analysis concludes with suggestions for further research on mass media's inter-relationships with mental illness.",
"title": ""
},
{
"docid": "ca6d23374e0caa125a91618164284b9a",
"text": "We propose a spectral clustering algorithm for the multi-view setting where we have access to multiple views of the data, each of which can be independently used for clustering. Our spectral clustering algorithm has a flavor of co-training, which is already a widely used idea in semi-supervised learning. We work on the assumption that the true underlying clustering would assign a point to the same cluster irrespective of the view. Hence, we constrain our approach to only search for the clusterings that agree across the views. Our algorithm does not have any hyperparameters to set, which is a major advantage in unsupervised learning. We empirically compare with a number of baseline methods on synthetic and real-world datasets to show the efficacy of the proposed algorithm.",
"title": ""
},
{
"docid": "9430b0f220538e878d99ef410fdc1ab2",
"text": "The prevalence of pregnancy, substance abuse, violence, and delinquency among young people is unacceptably high. Interventions for preventing problems in large numbers of youth require more than individual psychological interventions. Successful interventions include the involvement of prevention practitioners and community residents in community-level interventions. The potential of community-level interventions is illustrated by a number of successful studies. However, more inclusive reviews and multisite comparisons show that although there have been successes, many interventions did not demonstrate results. The road to greater success includes prevention science and newer community-centered models of accountability and technical assistance systems for prevention.",
"title": ""
},
{
"docid": "c2ade16afaf22ac6cc546134a1227d68",
"text": "In this work we present a novel method for the challenging problem of depth image up sampling. Modern depth cameras such as Kinect or Time-of-Flight cameras deliver dense, high quality depth measurements but are limited in their lateral resolution. To overcome this limitation we formulate a convex optimization problem using higher order regularization for depth image up sampling. In this optimization an an isotropic diffusion tensor, calculated from a high resolution intensity image, is used to guide the up sampling. We derive a numerical algorithm based on a primal-dual formulation that is efficiently parallelized and runs at multiple frames per second. We show that this novel up sampling clearly outperforms state of the art approaches in terms of speed and accuracy on the widely used Middlebury 2007 datasets. Furthermore, we introduce novel datasets with highly accurate ground truth, which, for the first time, enable to benchmark depth up sampling methods using real sensor data.",
"title": ""
},
{
"docid": "784d75662234e45f78426c690356d872",
"text": "Chinese-English parallel corpora are key resources for Chinese-English cross-language information processing, Chinese-English bilingual lexicography, Chinese-English language research and teaching. But so far large-scale Chinese-English corpus is still unavailable yet, given the difficulties and the intensive labours required. In this paper, our work towards building a large-scale Chinese-English parallel corpus is presented. We elaborate on the collection, annotation and mark-up of the parallel Chinese-English texts and the workflow that we used to construct the corpus. In addition, we also present our work toward building tools for constructing and using the corpus easily for different purposes. Among these tools, a parallel concordance tool developed by us is examined in detail. Several applications of the corpus being conducted are also introduced briefly in the paper.",
"title": ""
},
{
"docid": "bf11641b432e551d61c56180d8f0e8eb",
"text": "Deep Reinforcement Learning algorithms lead to agents that can solve difficult decision making problems in complex environments. However, many difficult multi-agent competitive games, especially real-time strategy games are still considered beyond the capability of current deep reinforcement learning algorithms, although there has been a recent effort to change this (OpenAI, 2017; Vinyals et al., 2017). Moreover, when the opponents in a competitive game are suboptimal, the current Nash Equilibrium seeking, selfplay algorithms are often unable to generalize their strategies to opponents that play strategies vastly different from their own. This suggests that a learning algorithm that is beyond conventional self-play is necessary. We develop Hierarchical Agent with Self-Play , a learning approach for obtaining hierarchically structured policies that can achieve higher performance than conventional self-play on competitive games through the use of a diverse pool of sub-policies we get from Counter Self-Play (CSP). We demonstrate that the ensemble policy generated by Hierarchical Agent with Self-Play can achieve better performance while facing unseen opponents that use sub-optimal policies. On a motivating iterated Rock-Paper-Scissor game and a partially observable real-time strategic game (http://generals.io/), we are led to the conclusion that Hierarchical Agent with Self-Play can perform better than conventional self-play as well as achieve 77% win rate against FloBot, an open-source agent which has ranked at position number 2 on the online leaderboards.",
"title": ""
},
{
"docid": "a6cf168632efb2a4c4a4d91c4161dc24",
"text": "This paper presents a systematic approach to transform various fault models to a unified model such that all faults of interest can be handled in one ATPG run. The fault models that can be transformed include, but are not limited to, stuck-at faults, various types of bridging faults, and cell-internal faults. The unified model is the aggressor-victim type of bridging fault model. Two transformation methods, namely fault-based and pattern-based transformations, are developed for cell-external and cell-internal faults, respectively. With the proposed approach, one can use an ATPG tool for bridging faults to deal with the test generation problems of multiple fault models simultaneously. Hence the total test generation time can be reduced and highly compact test sets can be obtained. Experimental results show that on average 54.94% (16.45%) and 47.22% (17.51%) test pattern volume reductions are achieved compared to the method that deals with the three fault models separately without (with) fault dropping for ISCAS'89 andIWLS'05 circuits, respectively.",
"title": ""
},
{
"docid": "1b556f4e0c69c81780973a7da8ba2f8e",
"text": "We explore ways of allowing for the offloading of computationally rigorous tasks from devices with slow logical processors onto a network of anonymous peer-processors. Recent advances in secret sharing schemes, decentralized consensus mechanisms, and multiparty computation (MPC) protocols are combined to create a P2P MPC market. Unlike other computational ”clouds”, ours is able to generically compute any arithmetic circuit, providing a viable platform for processing on the semantic web. Finally, we show that such a system works in a hostile environment, that it scales well, and that it adapts very easily to any future advances in the complexity theoretic cryptography used. Specifically, we show that the feasibility of our system can only improve, and is historically guaranteed to do so.",
"title": ""
},
{
"docid": "4c5eb84d510b9a2d064bfd53d981934f",
"text": "Video-game playing is popular among college students. Cognitive and negative consequences have been studied frequently. However, little is known about the influence of gaming behavior on IT college students’ academic performance. An increasing number of college students take online courses, use social network websites for social interactions, and play video games online. To analyze the relationship between college students’ gaming behavior and their academic performance, a research model is proposed and a survey study is conducted. The study result of a multiple regression analysis shows that self-control capability, social interaction using face-to-face or phone communications, and playing video games using a personal computer make statistically significant contributions to the IT college students’ academic performance measured by GPA.",
"title": ""
},
{
"docid": "ab1b4a5694e17772b01a2156afc08f55",
"text": "Clunealgia is caused by neuropathy of inferior cluneal branches of the posterior femoral cutaneous nerve resulting in pain in the inferior gluteal region. Image-guided anesthetic nerve injections are a viable and safe therapeutic option in sensory peripheral neuropathies that provides significant pain relief when conservative therapy fails and surgery is not desired or contemplated. The authors describe two cases of clunealgia, where computed-tomography-guided technique for nerve blocks of the posterior femoral cutaneous nerve and its branches was used as a cheaper, more convenient, and faster alternative with similar face validity as the previously described magnetic-resonance-guided injection.",
"title": ""
},
{
"docid": "857d8003dff05b8e1ba5eeb8f6b3c14e",
"text": "Traditional static spectrum allocation policies have been to grant each wireless service exclusive usage of certain frequency bands, leaving several spectrum bands unlicensed for industrial, scientific and medical purposes. The rapid proliferation of low-cost wireless applications in unlicensed spectrum bands has resulted in spectrum scarcity among those bands. Since most applications in Wireless Sensor Networks (WSNs) utilize the unlicensed spectrum, network-wide performance of WSNs will inevitably degrade as their popularity increases. Sharing of under-utilized licensed spectrum among unlicensed devices is a promising solution to the spectrum scarcity issue. Cognitive Radio (CR) is a new paradigm in wireless communication that allows sensor nodes as the unlicensed users or Secondary Users (SUs) to detect and use the under-utilized licensed spectrum temporarily. Given that the licensed or Primary Users (PUs) are oblivious to the presence of SUs, the SUs access the licensed spectrum opportunistically without interfering the PUs, while improving their own performance. In this paper, we propose an approach to build Cognitive Radio-based Wireless Sensor Networks (CR-WSNs). We believe that CR-WSN is the next-generation WSN. Realizing that both WSNs and CR present unique challenges to the design of CR-WSNs, we provide an overview and conceptual design of WSNs from the perspective of CR. The open issues are discussed to motivate new research interests in this field. We also present our method to achieving context-awareness and intelligence, which are the key components in CR networks, to address an open issue in CR-WSN.",
"title": ""
}
] |
scidocsrr
|
1a29bbccbf9a5b397dfecc734c31f6e2
|
Statistical approaches for enhancing causal interpretation of the M to Y relation in mediation analysis.
|
[
{
"docid": "102a9eb7ba9f65a52c6983d74120430e",
"text": "A key aim of social psychology is to understand the psychological processes through which independent variables affect dependent variables in the social domain. This objective has given rise to statistical methods for mediation analysis. In mediation analysis, the significance of the relationship between the independent and dependent variables has been integral in theory testing, being used as a basis to determine (1) whether to proceed with analyses of mediation and (2) whether one or several proposed mediator(s) fully or partially accounts for an effect. Synthesizing past research and offering new arguments, we suggest that the collective evidence raises considerable concern that the focus on the significance between the independent and dependent variables, both before and after mediation tests, is unjustified and can impair theory development and testing. To expand theory involving social psychological processes, we argue that attention in mediation analysis should be shifted towards assessing the magnitude and significance of indirect effects. Understanding the psychological processes by which independent variables affect dependent variables in the social domain has long been of interest to social psychologists. Although moderation approaches can test competing psychological mechanisms (e.g., Petty, 2006; Spencer, Zanna, & Fong, 2005), mediation is typically the standard for testing theories regarding process (e.g., Baron & Kenny, 1986; James & Brett, 1984; Judd & Kenny, 1981; MacKinnon, 2008; MacKinnon, Lockwood, Hoffman, West, & Sheets, 2002; Muller, Judd, & Yzerbyt, 2005; Preacher & Hayes, 2004; Preacher, Rucker, & Hayes, 2007; Shrout & Bolger, 2002). For example, dual process models of persuasion (e.g., Petty & Cacioppo, 1986) often distinguish among competing accounts by measuring the postulated underlying process (e.g., thought favorability, thought confidence) and examining their viability as mediators (Tormala, Briñol, & Petty, 2007). Thus, deciding on appropriate requirements for mediation is vital to theory development. Supporting the high status of mediation analysis in our field, MacKinnon, Fairchild, and Fritz (2007) report that research in social psychology accounts for 34% of all mediation tests in psychology more generally. In our own analysis of journal articles published from 2005 to 2009, we found that approximately 59% of articles in the Journal of Personality and Social Psychology (JPSP) and 65% of articles in Personality and Social Psychology Bulletin (PSPB) included at least one mediation test. Consistent with the observations of MacKinnon et al., we found that the bulk of these analyses continue to follow the causal steps approach outlined by Baron and Kenny (1986). Social and Personality Psychology Compass 5/6 (2011): 359–371, 10.1111/j.1751-9004.2011.00355.x a 2011 The Authors Social and Personality Psychology Compass a 2011 Blackwell Publishing Ltd The current article examines the viability of the causal steps approach in which the significance of the relationship between an independent variable (X) and a dependent variable (Y) are tested both before and after controlling for a mediator (M) in order to examine the validity of a theory specifying mediation. Traditionally, the X fi Y relationship is tested prior to mediation to determine whether there is an effect to mediate, and it is also tested after introducing a potential mediator to determine whether that mediator fully or partially accounts for the effect. 
At first glance, the requirement of a significant X → Y association prior to examining mediation seems reasonable. If there is no significant X → Y relationship, how can there be any mediation of it? Furthermore, the requirement that X → Y become nonsignificant when controlling for the mediator seems sensible in order to claim ‘full mediation’. What is the point of hypothesizing or testing for additional mediators if the inclusion of one mediator renders the initial relationship indistinguishable from zero? Despite the intuitive appeal of these requirements, the present article raises serious concerns about their use.",
"title": ""
}
] |
[
{
"docid": "79ff4f84edc8d49d046de1f9392a6a38",
"text": "Search results visualization has emerged as an important research topic due to its application on search engine amelioration. From the perspective of machine learning, the text search results visualization task fits to the multi-label learning framework that a document is usually related to multiple category labels. In this paper, a Näıve Bayesian (NB) multi-label classification algorithm is proposed by incorporating a two-step feature selection strategy which aims to satisfy the assumption of conditional independency in NB classification theory. The experiments over public data set demonstrate that the proposed method has highly competitive performance with several well-established multi-label classification algorithms. We implement a prototype system named TJ-MLWC based on the proposed algorithm, which acts as an intermediate layer between users and a commercial Internet Search Engine, allowing the search results of a query displaying by one or multiple categories. Testing results indicate that our prototype improves search experience by adding the function of browsing search results by category.",
"title": ""
},
{
"docid": "521fb63129bde3f7448c4b67ead1adae",
"text": "The pore-scale numerical works on the effective thermal conductivity and melting process of copper foam filled with paraffin, and a phase-change material (PCM) with low thermal conductivity, were conducted by utilizing the two-dimensional (2D) hexahedron Calmidi-Mahajan (C-M) model and the three-dimensional (3D) dodecahedron Boomsma-Poulikakos (B-P) model. The unidirectional heat transfer experiment was established to investigate the effective thermal conductivity of the composite. The simulation results of the effective thermal conductivity of the composite in 2D C-M model were 6.93, 5.41, 4.22 and 2.75 W/(m·K), for porosity of 93%, 95%, 96% and 98% respectively, while the effective thermal conductivity of the composite in 3D B-P model were 7.07, 5.24, 3.07 and 1.22 W/(m·K). The simulated results were in agreement with the experimental data obtained for the composite. It was found that the copper foam can effectively enhance the thermal conductivity of the paraffin, i.e., the smaller the porosity of copper foam, the higher the effective thermal conductivity of the composite. In addition, the Fluent Solidification/Melting model was applied to numerically investigate the melting process of the paraffin in the pore. Lastly, the solid–liquid interface development, completely melted time and temperature field distribution of paraffin in the pore of copper foam were also discussed.",
"title": ""
},
{
"docid": "323dc0695ea4f5d2e848cb8e33037686",
"text": "A new measure of hypersensitive narcissism was derived by correlating the items of H. A. Murray’s (1938) Narcism Scale with an MMPI-based composite measure of covert narcissism. In three samples of college students (total N 5 403), 10 items formed a reliable measure: the Hypersensitive Narcissism Scale (HSNS). The new HSNS and the MMPI-based composite showed similar patterns of correlations with the Big Five Inventory, and both measures correlated near zero with the Narcissistic Personality Inventory, which assesses overt narcissism. Results support P. Wink’s (1991) distinction between covert and overt narcissistic tendencies in the normal range of individual differences and suggest that it would be beneficial for personality researchers to measure both types of narcissism in future studies. 1997 Academic",
"title": ""
},
{
"docid": "af3b0fb6b2babe8393b2e715f92a2c97",
"text": "Collaboration is the “mutual engagement of participants in a coordinated effort to solve a problem together.” Collaborative interactions are characterized by shared goals, symmetry of structure, and a high degree of negotiation, interactivity, and interdependence. Interactions producing elaborated explanations are particularly valuable for improving student learning. Nonresponsive feedback, on the other hand, can be detrimental to student learning in collaborative situations. Collaboration can have powerful effects on student learning, particularly for low-achieving students. However, a number of factors may moderate the impact of collaboration on student learning, including student characteristics, group composition, and task characteristics. Although historical frameworks offer some guidance as to when and how children acquire and develop collaboration skills, there is scant empirical evidence to support such predictions. However, because many researchers appear to believe children can be taught to collaborate, they urge educators to provide explicit instruction that encourages development of skills such as coordination, communication, conflict resolution, decision-making, problemsolving, and negotiation. Such training should also emphasize desirable qualities of interaction, such as providing elaborated explanations, asking direct and specific questions, and responding appropriately to the requests of others. Teachers should structure tasks in ways that will support the goals of collaboration, specify “ground rules” for interaction, and regulate such interactions. There are a number of challenges in using group-based tasks to assess collaboration. Several suggestions for assessing collaboration skills are made.",
"title": ""
},
{
"docid": "d97223d4bf69fa4879f997cca7eaa226",
"text": "Detecting execution anomalies is very important to monitoring and maintenance of cloud systems. People often use execution logs for troubleshooting and problem diagnosis, which is time consuming and error-prone. There is great demand for automatic anomaly detection based on logs. In this paper, we mine a time-weighted control flow graph (TCFG) that captures healthy execution flows of each component in cloud, and automatically raise anomaly alerts on observing deviations from TCFG. We outlined three challenges that are solved in this paper, including how to deal with the interleaving of multiple threads in logs, how to identify operational logs that do not contain any transactional information, and how to split the border of each transaction flow in the TCFG. We evaluate the effectiveness of our approach by leveraging logs from an IBM public cloud production platform and two simulated systems in the lab environment. The evaluation results show that our TCFG mining and anomaly diagnosis both perform over 80% precision and recall on average.",
"title": ""
},
{
"docid": "03a39c98401fc22f1a376b9df66988dc",
"text": "A highly efficient wireless power transfer (WPT) system is required in many applications to replace the conventional wired system. The high temperature superconducting (HTS) wires are examined in a WPT system to increase the power-transfer efficiency (PTE) as compared with the conventional copper/Litz conductor. The HTS conductors are naturally can produce higher amount of magnetic field with high induced voltage to the receiving coil. Moreover, the WPT systems are prone to misalignment, which can cause sudden variation in the induced voltage and lead to rapid damage of the resonant capacitors connected in the circuit. Hence, the protection or elimination of resonant capacitor is required to increase the longevity of WPT system, but both the adoptions will operate the system in nonresonance mode. The absence of resonance phenomena in the WPT system will drastically reduce the PTE and correspondingly the future commercialization. This paper proposes an open bifilar spiral coils based self-resonant WPT method without using resonant capacitors at both the sides. The mathematical modeling and circuit simulation of the proposed system is performed by designing the transmitter coil using HTS wire and the receiver with copper coil. The three-dimensional modeling and finite element simulation of the proposed system is performed to analyze the current density at different coupling distances between the coil. Furthermore, the experimental results show the PTE of 49.8% under critical coupling with the resonant frequency of 25 kHz.",
"title": ""
},
{
"docid": "ab400c41db805b1574e8db80f72e47bd",
"text": "Radiation from printed millimeter-wave antennas integrated in mobile terminals is affected by surface currents on chassis, guided waves trapped in dielectric layers, superstrates, and the user’s hand, making mobile antenna design for 5G communication challenging. In this paper, four canonical types of printed 28-GHz antenna elements are integrated in a 5G mobile terminal mock-up. Different kinds of terminal housing effects are examined separately, and the terminal housing effects are also diagnosed through equivalent currents by using the inverse source technique. To account for the terminal housing effects on a beam-scanning antenna subarray, we propose the effective beam-scanning efficiency to evaluate its coverage performance. This paper presents the detailed analysis, results, and new concepts regarding the terminal housing effects, and thereby provides valuable insight into the practical 5G mobile antenna design and radiation performance characterization.",
"title": ""
},
{
"docid": "21c15eb5420a7345cc2900f076b15ca1",
"text": "Prokaryotic CRISPR-Cas genomic loci encode RNA-mediated adaptive immune systems that bear some functional similarities with eukaryotic RNA interference. Acquired and heritable immunity against bacteriophage and plasmids begins with integration of ∼30 base pair foreign DNA sequences into the host genome. CRISPR-derived transcripts assemble with CRISPR-associated (Cas) proteins to target complementary nucleic acids for degradation. Here we review recent advances in the structural biology of these targeting complexes, with a focus on structural studies of the multisubunit Type I CRISPR RNA-guided surveillance and the Cas9 DNA endonuclease found in Type II CRISPR-Cas systems. These complexes have distinct structures that are each capable of site-specific double-stranded DNA binding and local helix unwinding.",
"title": ""
},
{
"docid": "0d8c13e68b57781300e9f2666141e2eb",
"text": "Currently most car drivers use static routing devices based on the shortest distance between start and end position. But the shortest route can differ from the shortest route in time. To compute alternative routes it is necessary to have good prediction models of expected congestions and a fast algorithm to compute the shortest path while being able to react to dynamic changes in the network caused by special incidents. In this paper we present a dynamic routing system based on Ant Based Control (ABC). Starting from historical traffic data, ants are used to compute and predict the travel times along the road segments. They are finding the fastest routes not only looking to the past and present traffic conditions but also trying to anticipate and avoid future congestions.",
"title": ""
},
{
"docid": "9f0e7fbe10ce2998dac649b6a71e58a6",
"text": "A method of workspace modelling for spherical parallel manipulators (SPMs) of symmetrical architecture is developed by virtue of Euler parameters in the paper. The adoption of Euler parameters in the expression of spatial rotations of SPMs helps not only to eliminate the possible singularity in the rotation matrix, but also to formulate all equations in polynomials, which are more easily manipulated. Moreover, a homogeneous workspace can be obtained with Euler parameters for the SPMs, which facilitates the evaluation of dexterity. In this work, the problem of workspace modelling and analysis is formulated in terms of Euler parameters. An equation dealing with boundary surfaces is derived and branches of boundary surface are identified. Evaluation of dexterity is explored to quantitatively describe the capability of a manipulator to attain orientations. The singularity identification is also addressed. Examples are included to demonstrate the application of the proposed method.",
"title": ""
},
{
"docid": "a793d0cda70755cb0b0e2c7791ba53ec",
"text": "We consider the problem of estimating detailed 3D structure from a single still image of an unstructured environment. Our goal is to create 3D models that are both quantitatively accurate as well as visually pleasing. For each small homogeneous patch in the image, we use a Markov random field (MRF) to infer a set of \"plane parametersrdquo that capture both the 3D location and 3D orientation of the patch. The MRF, trained via supervised learning, models both image depth cues as well as the relationships between different parts of the image. Other than assuming that the environment is made up of a number of small planes, our model makes no explicit assumptions about the structure of the scene; this enables the algorithm to capture much more detailed 3D structure than does prior art and also give a much richer experience in the 3D flythroughs created using image-based rendering, even for scenes with significant nonvertical structure. Using this approach, we have created qualitatively correct 3D models for 64.9 percent of 588 images downloaded from the Internet. We have also extended our model to produce large-scale 3D models from a few images.",
"title": ""
},
{
"docid": "1f6637ecfc9415dd0f827ab6d3149af3",
"text": "Impaired renal function due to acute kidney injury (AKI) and/or chronic kidney diseases (CKD) is frequent in cirrhosis. Recurrent episodes of AKI may occur in end-stage cirrhosis. Differential diagnosis between functional (prerenal and hepatorenal syndrome) and acute tubular necrosis (ATN) is crucial. The concept that AKI and CKD represent a continuum rather than distinct entities, is now emerging. Not all patients with AKI have a potential for full recovery. Precise evaluation of kidney function and identification of kidney changes in patients with cirrhosis is central in predicting reversibility. This review examines current biomarkers for assessing renal function and identifying the cause and mechanisms of impaired renal function. When CKD is suspected, clearance of exogenous markers is the reference to assess glomerular filtration rate, as creatinine is inaccurate and cystatin C needs further evaluation. Recent biomarkers may help differentiate ATN from hepatorenal syndrome. Neutrophil gelatinase-associated lipocalin has been the most extensively studied biomarker yet, however, there are no clear-cut values that differentiate each of these conditions. Studies comparing ATN and hepatorenal syndrome in cirrhosis, do not include a gold standard. Combinations of innovative biomarkers are attractive to identify patients justifying simultaneous liver and kidney transplantation. Accurate biomarkers of underlying CKD are lacking and kidney biopsy is often contraindicated in this population. Urinary microRNAs are attractive although not definitely validated. Efforts should be made to develop biomarkers of kidney fibrosis, a common and irreversible feature of CKD, whatever the cause. Biomarkers of maladaptative repair leading to irreversible changes and CKD after AKI are also promising.",
"title": ""
},
{
"docid": "d1291a368157becd881f00a41ea03dd5",
"text": "Optimization problems relating to wireless sensor network planning, design, deployment and operation often give rise to multi-objective optimization formulations where multiple desirable objectives compete with each other and the decision maker has to select one of the tradeoff solutions. These multiple objectives may or may not conflict with each other. Keeping in view the nature of the application, the sensing scenario and input/output of the problem, the type of optimization problem changes. To address different nature of optimization problems relating to wireless sensor network design, deployment, operation, planing and placement, there exist a plethora of optimization solution types. We review and analyze different desirable objectives to show whether they conflict with each other, support each other or they are design dependent. We also present a generic multi-objective optimization problem relating to wireless sensor network which consists of input variables, required output, objectives and constraints. A list of constraints is also presented to give an overview of different constraints which are considered while formulating the optimization problems in wireless sensor networks. Keeping in view the multi facet coverage of this article relating to multi-objective optimization, this will open up new avenues of research in the area of multi-objective optimization relating to wireless sensor networks.",
"title": ""
},
{
"docid": "e51d244f45cda8826dc94ba35a12d066",
"text": "This article describes part of our contribution to the “Bell Kor’s Pragmatic Chaos” final solution, which won the Netflix Grand Prize. The other portion of the contribution was creat ed while working at AT&T with Robert Bell and Chris Volinsky, as reported in our 2008 Progress Prize report [3]. The final solution includes all the predictors described there. In th is article we describe only the newer predictors. So what is new over last year’s solution? First we further improved the baseline predictors (Sec. III). This in turn impr oves our other models, which incorporate those predictors, like the matrix factorization model (Sec. IV). In addition, an exten sion of the neighborhood model that addresses temporal dynamics was introduced (Sec. V). On the Restricted Boltzmann Machines (RBM) front, we use a new RBM model with superior accuracy by conditioning the visible units (Sec. VI). The fin al addition is the introduction of a new blending algorithm, wh ich is based on gradient boosted decision trees (GBDT) (Sec. VII ).",
"title": ""
},
{
"docid": "6a85b9ecb1aa3bbac2d7e05a79e865e4",
"text": "Image representations derived from pre-trained Convolutional Neural Networks (CNNs) have become the new state of the art in computer vision tasks such as instance retrieval. This work explores the suitability for instance retrieval of image-and region-wise representations pooled from an object detection CNN such as Faster R-CNN. We take advantage of the object proposals learned by a Region Proposal Network (RPN) and their associated CNN features to build an instance search pipeline composed of a first filtering stage followed by a spatial reranking. We further investigate the suitability of Faster R-CNN features when the network is fine-tuned for the same objects one wants to retrieve. We assess the performance of our proposed system with the Oxford Buildings 5k, Paris Buildings 6k and a subset of TRECVid Instance Search 2013, achieving competitive results.",
"title": ""
},
{
"docid": "e13d9f685cff72248eb3744a13d2079a",
"text": "48 AI MAGAZINE AI and human-computer interaction (HCI) are converging. “Usable AI” conference events in 2008 and 2009 preceded this special issue, and ACM will launch a widely-supported Transactions on Interactive Intelligent Systems. AI techniques are in the toolset of more and more HCI researchers, and applications of machine learning are increasingly visible in the HCI literature. Other maturing AI technologies seek input from the HCI community. The two fields have met under shared tents for some time, notably within International Journal of Man-Machine Studies (subsequently International Journal of Human-Computer Studies) and at the Intelligent User Interface conferences cosponsored by ACM’s Special Interest Groups on Computer-Human Interaction (SIGCHI) and Artificial Intelligence (SIGART). But little of this research has flowed back to the major AI and HCI conferences and journals. In this article, I describe some research that has bridged the fields, but contact has been sporadic. Logically, they could have been closer. Both explore the nexus of computing and intelligent behavior. Both claim Allen Newell and Herb Simon as founding figures. Working over the years as an HCI person in AI groups at Wang Laboratories, MIT, MCC, and Microsoft, and alongside AI faculty at Aarhus University and the University of California, Irvine, I was puzzled by the separation. The introduction to this special issue notes the different “monocular views” of interaction with intelligent systems. AI focused on devising better algorithms, HCI on how to improve the use of existing algorithms. AI originated in mathematics and engineering, HCI in psychology. But half a century is enough time to spawn a hybrid or synthesis had forces not pushed the two fields apart.",
"title": ""
},
{
"docid": "9bbf2a9f5afeaaa0f6ca12e86aef8e88",
"text": "Phishing is a model problem for illustrating usability concerns of privacy and security because both system designers and attackers battle using user interfaces to guide (or misguide) users.We propose a new scheme, Dynamic Security Skins, that allows a remote web server to prove its identity in a way that is easy for a human user to verify and hard for an attacker to spoof. We describe the design of an extension to the Mozilla Firefox browser that implements this scheme.We present two novel interaction techniques to prevent spoofing. First, our browser extension provides a trusted window in the browser dedicated to username and password entry. We use a photographic image to create a trusted path between the user and this window to prevent spoofing of the window and of the text entry fields.Second, our scheme allows the remote server to generate a unique abstract image for each user and each transaction. This image creates a \"skin\" that automatically customizes the browser window or the user interface elements in the content of a remote web page. Our extension allows the user's browser to independently compute the image that it expects to receive from the server. To authenticate content from the server, the user can visually verify that the images match.We contrast our work with existing anti-phishing proposals. In contrast to other proposals, our scheme places a very low burden on the user in terms of effort, memory and time. To authenticate himself, the user has to recognize only one image and remember one low entropy password, no matter how many servers he wishes to interact with. To authenticate content from an authenticated server, the user only needs to perform one visual matching operation to compare two images. Furthermore, it places a high burden of effort on an attacker to spoof customized security indicators.",
"title": ""
},
{
"docid": "e0fc6fc1425bb5786847c3769c1ec943",
"text": "Developing manufacturing simulation models usually requires experts with knowledge of multiple areas including manufacturing, modeling, and simulation software. The expertise requirements increase for virtual factory models that include representations of manufacturing at multiple resolution levels. This paper reports on an initial effort to automatically generate virtual factory models using manufacturing configuration data in standard formats as the primary input. The execution of the virtual factory generates time series data in standard formats mimicking a real factory. Steps are described for auto-generation of model components in a software environment primarily oriented for model development via a graphic user interface. Advantages and limitations of the approach and the software environment used are discussed. The paper concludes with a discussion of challenges in verification and validation of the virtual factory prototype model with its multiple hierarchical models and future directions.",
"title": ""
},
{
"docid": "52a01a3bb4122e313c3146363b3fb954",
"text": "We demonstrate how movements of multiple people or objects within a building can be displayed on a network representation of the building, where nodes are rooms and edges are doors. Our representation shows the direction of movements between rooms and the order in which rooms are visited, while avoiding occlusion or overplotting when there are repeated visits or multiple moving people or objects. We further propose the use of a hybrid visualization that mixes geospatial and topological (network-based) representations, enabling focus-in-context and multi-focal visualizations. An experimental comparison found that the topological representation was significantly faster than the purely geospatial representation for three out of four tasks.",
"title": ""
},
{
"docid": "5d52830a1f24dfb74f9425dbc376728e",
"text": "In this paper, the performance of air-cored (ironless) stator axial flux permanent magnet machines with different types of concentrated-coil nonoverlapping windings is evaluated. The evaluation is based on theoretical analysis and is confirmed by finite-element analysis and measurements. It is shown that concentrated-coil winding machines can have a similar performance as that of normal overlapping winding machines using less copper.",
"title": ""
}
] |
scidocsrr
|
d8ba24d3007114c30a4b381a8ab581b4
|
An Unsupervised Speaker Clustering Technique based on SOM and I-vectors for Speech Recognition Systems
|
[
{
"docid": "83525470a770a036e9c7bb737dfe0535",
"text": "It is known that the performance of the i-vectors/PLDA based speaker verification systems is affected in the cases of short utterances and limited training data. The performance degradation appears because the shorter the utterance, the less reliable the extracted i-vector is, and because the total variability covariance matrix and the underlying PLDA matrices need a significant amount of data to be robustly estimated. Considering the “MIT Mobile Device Speaker Verification Corpus” (MIT-MDSVC) as a representative dataset for robust speaker verification tasks on limited amount of training data, this paper investigates which configuration and which parameters lead to the best performance of an i-vectors/PLDA based speaker verification. The i-vectors/PLDA based system achieved good performance only when the total variability matrix and the underlying PLDA matrices were trained with data belonging to the enrolled speakers. This way of training means that the system should be fully retrained when new enrolled speakers were added. The performance of the system was more sensitive to the amount of training data of the underlying PLDA matrices than to the amount of training data of the total variability matrix. Overall, the Equal Error Rate performance of the i-vectors/PLDA based system was around 1% below the performance of a GMM-UBM system on the chosen dataset. The paper presents at the end some preliminary experiments in which the utterances comprised in the CSTR VCTK corpus were used besides utterances from MIT-MDSVC for training the total variability covariance matrix and the underlying PLDA matrices.",
"title": ""
}
] |
[
{
"docid": "72a490e38f09001ab8e05d0427542647",
"text": "Systems based on i–vectors represent the current state–of–the–art in text-independent speaker recognition. Unlike joint factor analysis JFA, which models both speaker and intersession subspaces separately, in the i–vector approach all the important variability is modeled in a single low-dimensional subspace. This paper is based on the observation that JFA estimates a more informative speaker subspace than the “total variability” i–vector subspace, because the latter is obtained by considering each training segment as belonging to a different speaker. We propose a speaker modeling approach that extracts a compact representation of a speech segment, similar to the speaker factors of JFA and to i–vectors, referred to as “e–vector.” Estimating the e–vector subspace follows a procedure similar to i–vector training, but produces a more accurate speaker subspace, as confirmed by the results of a set of tests performed on the NIST 2012 and 2010 Speaker Recognition Evaluations. Simply replacing the i–vectors with e–vectors we get approximately 10% average improvement of the C $_{\\text{primary}}$ cost function, using different systems and classifiers. It is worth noting that these performance gains come without any additional memory or computational costs with respect to the standard i–vector systems.",
"title": ""
},
{
"docid": "d874ab5fd259fbc5e4afd66432ef5497",
"text": "Camera tracking for uncalibrated image sequences has now reached a level of maturity where 3D point structure and cameras can be recovered automatically for a significant class of scene types and camera motions. However, problems still occur, and their solution requires a combination of theoretical analysis and good engineering. We describe several such problems including missing data, degeneracy and deviations from the pinhole camera model, and discuss their solutions. We also discuss the incorporation of prior knowledge and the case of multiple rigid motions.",
"title": ""
},
{
"docid": "3b07476ebb8b1d22949ec32fc42d2d05",
"text": "We provide a systematic review of the adaptive comanagement (ACM) literature to (i) investigate how the concept of governance is considered and (ii) examine what insights ACM offers with reference to six key concerns in environmental governance literature: accountability and legitimacy; actors and roles; fit, interplay, and scale; adaptiveness, flexibility, and learning; evaluation and monitoring; and, knowledge. Findings from the systematic review uncover a complicated relationship with evidence of conceptual closeness as well as relational ambiguities. The findings also reveal several specific contributions from the ACM literature to each of the six key environmental governance concerns, including applied strategies for sharing power and responsibility and value of systems approaches in understanding problems of fit. More broadly, the research suggests a dissolving or fuzzy boundary between ACM and governance, with implications for understanding emerging approaches to navigate social-ecological system change. Future research opportunities may be found at the confluence of ACM and environmental governance scholarship, such as identifying ways to build adaptive capacity and encouraging the development of more flexible governance arrangements.",
"title": ""
},
{
"docid": "dd7f7d18b12cb71ed4c3acecf6383462",
"text": "Identifying malicious software executables is made difficult by the constant adaptations introduced by miscreants in order to evade detection by antivirus software. Such changes are akin to mutations in biological sequences. Recently, high-throughput methods for gene sequence classification have been developed by the bioinformatics and computational biology communities. In this paper, we apply methods designed for gene sequencing to detect malware in a manner robust to attacker adaptations. Whereas most gene classification tools are optimized for and restricted to an alphabet of four letters (nucleic acids), we have selected the Strand gene sequence classifier for malware classification. Strand’s design can easily accommodate unstructured data with any alphabet, including source code or compiled machine code. To demonstrate that gene sequence classification tools are suitable for classifying malware, we apply Strand to approximately 500 GB of malware data provided by the Kaggle Microsoft Malware Classification Challenge (BIG 2015) used for predicting nine classes of polymorphic malware. Experiments show that, with minimal adaptation, the method achieves accuracy levels well above 95% requiring only a fraction of the training times used by the winning team’s method.",
"title": ""
},
{
"docid": "194c1a9a16ee6dad00c41544fca74371",
"text": "Computers are not (yet?) capable of being reasonable any more than is a Second Lieutenant. Against stupidity, the Gods themselves contend in vain. Banking systems include the back-end bookkeeping systems that record customers' account details and transaction processing systems such as cash machine networks and high-value interbank money transfer systems that feed them with data. They are important for a number of reasons. First, bookkeeping was for many years the main business of the computer industry, and banking was its most intensive area of application. Personal applications such as Netscape and Powerpoint might now run on more machines, but accounting is still the critical application for the average business. So the protection of bookkeeping systems is of great practical importance. It also gives us a well-understood model of protection in which confidentiality plays almost no role, but where the integrity of records (and their immutability once made) is of paramount importance. Second, transaction processing systems—whether for small debits such as $50 cash machine withdrawals or multimillion-dollar wire transfers—were the applications that launched commercial cryptography. Banking applications drove the development not just of encryption algorithms and protocols, but also of the supporting technologies, such as tamper-resistant cryptographic processors. These processors provide an important and interesting example of a trusted computing base that is quite different from",
"title": ""
},
{
"docid": "094f1e41fde1392cbdc3e1956cf2fc53",
"text": "This paper investigates the characteristics of the active and reactive power sharing in a parallel inverters system under different system impedance conditions. The analyses conclude that the conventional droop method cannot achieve efficient power sharing for the case of a system with complex impedance condition. To achieve the proper power balance and minimize the circulating current in the different impedance situations, a novel droop controller that considers the impact of complex impedance is proposed in this paper. This controller can simplify the coupled active and reactive power relationships, which are caused by the complex impedance in the parallel system. In addition, a virtual complex impedance loop is included in the proposed controller to minimize the fundamental and harmonic circulating current that flows in the parallel system. Compared to the other methods, the proposed controller can achieve accurate power sharing, offers efficient dynamic performance, and is more adaptive to different line impedance situations. Simulation and experimental results are presented to prove the validity and the improvements achieved by the proposed controller.",
"title": ""
},
{
"docid": "651ddcbc6d514da005d0d4319a325e96",
"text": "Convolutional Neural Networks (CNNs) have recently demonstrated a superior performance in computer vision applications; including image retrieval. This paper introduces a bilinear CNN-based model for the first time in the context of Content-Based Image Retrieval (CBIR). The proposed architecture consists of two feature extractors using a pre-trained deep CNN model fine-tuned for image retrieval task to generate a Compact Root Bilinear CNN (CRB-CNN) architecture. Image features are directly extracted from the activations of convolutional layers then pooled at image locations. Additionally, the output size of bilinear features is largely reduced to a compact but high descriminative image representation using kernal-based low-dimensional projection and pooling, which is a fundamental improvement in the retrieval performance in terms of search speed and memory size. An end-to-end training is applied by back-probagation to learn the parameters of the final CRB-CNN. Experimental results reported on the standard Holidays image dataset show the efficiency of the architecture at extracting and learning even complex features for CBIR tasks. Specifically, using a vector of 64-dimension, it achieves 95.13% mAP accuracy and outperforms the best results of state-of-the-art approaches.",
"title": ""
},
{
"docid": "d75d453181293c92ec9bab800029e366",
"text": "For a majority of applications implemented today, the Intermediate Bus Architecture (IBA) has been the preferred power architecture. This power architecture has led to the development of the isolated, semi-regulated DC/DC converter known as the Intermediate Bus Converter (IBC). Fixed ratio Bus Converters that employ a new power topology known as the Sine Amplitude Converter (SAC) offer dramatic improvements in power density, noise reduction, and efficiency over the existing IBC products. As electronic systems continue to trend toward lower voltages with higher currents and as the speed of contemporary loads - such as state-of-the-art processors and memory - continues to increase, the power systems designer is challenged to provide small, cost effective and efficient solutions that offer the requisite performance. Traditional power architectures cannot, in the long run, provide the required performance. Vicor's Factorized Power Architecture (FPA), and the implementation of V·I Chips, provides a revolutionary new and optimal power conversion solution that addresses the challenge in every respect. The technology behind these power conversion engines used in the IBC and V·I Chips is analyzed and contextualized in a system perspective.",
"title": ""
},
{
"docid": "73357505fc60f78d8457b5348b469688",
"text": "In this paper we describe the participation of the CUNY-BLENDER team in the Temporal Slot Filling (TSF) pilot task organized as part of the TAC-KBP2010 evaluation. Our team submitted results for both the “diagnostic” and “full” TSF subtasks, obtaining the top score in the diagnostic subtask. We implemented a “structured” and a “flat” approach to the classification of temporal expressions. The structured approach captures long syntactic contexts surrounding the query entity, slot fill and temporal expression using a dependency path kernel tailored to this task. The flat approach exploits information such as the lexical context and shallow dependency features. In order to provide enough training data for these classifiers we used a distant supervision approach to automatically generate a large amount of training instances from the Web. This data was further refined by applying logistic regression models for instance relabeling and feature selection methods.",
"title": ""
},
{
"docid": "dcb64355bb122fae6ac390d4a63fae08",
"text": "The initial state of an Unmanned Aerial Vehicle (UAV) system and the relative state of the system, the continuous inputs of each flight unit are piecewise linear by a Control Parameterization and Time Discretization (CPTD) method. The approximation piecewise linearization control inputs are used to substitute for the continuous inputs. In this way, the multi-UAV formation reconfiguration problem can be formulated as an optimal control problem with dynamical and algebraic constraints. With strict constraints and mutual interference, the multi-UAV formation reconfiguration in 3-D space is a complicated problem. The recent boom of bio-inspired algorithms has attracted many researchers to the field of applying such intelligent approaches to complicated optimization problems in multi-UAVs. In this paper, a Hybrid Particle Swarm Optimization and Genetic Algorithm (HPSOGA) is proposed to solve the multi-UAV formation reconfiguration problem, which is modeled as a parameter optimization problem. This new approach combines the advantages of Particle Swarm Optimization (PSO) and Genetic Algorithm (GA), which can find the time-optimal solutions simultaneously. The proposed HPSOGA will also be compared with basic PSO algorithm and the series of experimental results will show that our HPSOGA outperforms PSO in solving multi-UAV formation reconfiguration problem under complicated environments.",
"title": ""
},
{
"docid": "25822c79792325b86a90a477b6e988a1",
"text": "As the social networking sites get more popular, spammers target these sites to spread spam posts. Twitter is one of the most popular online social networking sites where users communicate and interact on various topics. Most of the current spam filtering methods in Twitter focus on detecting the spammers and blocking them. However, spammers can create a new account and start posting new spam tweets again. So there is a need for robust spam detection techniques to detect the spam at tweet level. These types of techniques can prevent the spam in real time. To detect the spam at tweet level, often features are defined, and appropriate machine learning algorithms are applied in the literature. Recently, deep learning methods are showing fruitful results on several natural language processing tasks. We want to use the potential benefits of these two types of methods for our problem. Toward this, we propose an ensemble approach for spam detection at tweet level. We develop various deep learning models based on convolutional neural networks (CNNs). Five CNNs and one feature-based model are used in the ensemble. Each CNN uses different word embeddings (Glove, Word2vec) to train the model. The feature-based model uses content-based, user-based, and n-gram features. Our approach combines both deep learning and traditional feature-based models using a multilayer neural network which acts as a meta-classifier. We evaluate our method on two data sets, one data set is balanced, and another one is imbalanced. The experimental results show that our proposed method outperforms the existing methods.",
"title": ""
},
{
"docid": "cb6704ade47db83a6338e43897d72956",
"text": "Renewable energy sources are essential paths towards sustainable development and CO2 emission reduction. For example, the European Union has set the target of achieving 22% of electricity generation from renewable sources by 2010. However, the extensive use of this energy source is being avoided by some technical problems as fouling and slagging in the surfaces of boiler heat exchangers. Although these phenomena were extensively studied in the last decades in order to optimize the behaviour of large coal power boilers, a simple, general and effective method for fouling control has not been developed. For biomass boilers, the feedstock variability and the presence of new components in ash chemistry increase the fouling influence in boiler performance. In particular, heat transfer is widely affected and the boiler capacity becomes dramatically reduced. Unfortunately, the classical approach of regular sootblowing cycles becomes clearly insufficient for them. Artificial Intelligence (AI) provides new means to undertake this problem. This paper illustrates a methodology based on Neural Networks (NNs) and Fuzzy-Logic Expert Systems to select the moment for activating sootblowing in an industrial biomass boiler. The main aim is to minimize the boiler energy and efficiency losses with a proper sootblowing activation. Although the NN type used in this work is well-known and the Hybrid Systems had been extensively used in the last decade, the excellent results obtained in the use of AI in industrial biomass boilers control with regard to previous approaches makes this work a novelty. r 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a8858713a7040ce6dd25706c9b72b45c",
"text": "A new type of wearable button antenna for wireless local area network (WLAN) applications is proposed. The antenna is composed of a button with a diameter of circa 16 mm incorporating a patch on top of a dielectric disc. The button is located on top of a textile substrate and a conductive textile ground that are to be incorporated in clothing. The main characteristic feature of this antenna is that it shows two different types of radiation patterns, a monopole type pattern in the 2.4 GHz band for on-body communications and a broadside type pattern in the 5 GHz band for off-body communications. A very high efficiency of about 90% is obtained, which is much higher than similar full textile solutions in the literature. A prototype has been fabricated and measured. The effect of several real-life situations such as a tilted button and bending of the textile ground have been studied. Measurements agree very well with simulations.",
"title": ""
},
{
"docid": "f4c4f721fcbda6a740e45c8052977487",
"text": "We propose a method for improving the unconstrained segmentation of speech into phoneme-like units using deep neural networks. The proposed approach is not dependent on acoustic models or forced alignment, but operates using the acoustic features directly. Previous solutions of this type were plagued by the tendency to hypothesise additional incorrect phoneme boundaries near the phoneme transitions. We show that the application of deep neural networks is able to reduce this over-segmentation substantially, and achieve improved segmentation accuracies. Furthermore, we find that generative pre-training offers an additional benefit.",
"title": ""
},
{
"docid": "132ae7b4d5137ecf5020a7e2501db91b",
"text": "This research aims to combine the mathematical theory of evidence with the rule based logics to refine the predictable output. Integrating Fuzzy Logic and Dempster-Shafer theory is calculated from the similarity of Fuzzy membership function. The novelty aspect of this work is that basic probability assignment is proposed based on the similarity measure between membership function. The similarity between Fuzzy membership function is calculated to get a basic probability assignment. The DempsterShafer mathematical theory of evidence has attracted considerable attention as a promising method of dealing with some of the basic problems arising in combination of evidence and data fusion. DempsterShafer theory provides the ability to deal with ignorance and missing information. The foundation of Fuzzy logic is natural language which can help to make full use of expert information.",
"title": ""
},
{
"docid": "a96f27e15c3bbc60810b73a5de21a06c",
"text": "Illumination always affects image quality seriously in practice. To weaken illumination effect on image quality, this paper proposes an adaptive gamma correction method. First, a mapping between pixel and gamma values is built. The gamma values are then revised using two non-linear functions to prevent image distortion. Experimental results demonstrate that the proposed method performs better in readjusting image illumination condition and improving image quality.",
"title": ""
},
{
"docid": "5b0530f94f476754034c92292e02b390",
"text": "Many seemingly simple questions that individual users face in their daily lives may actually require substantial number of computing resources to identify the right answers. For example, a user may want to determine the right thermostat settings for different rooms of a house based on a tolerance range such that the energy consumption and costs can be maximally reduced while still offering comfortable temperatures in the house. Such answers can be determined through simulations. However, some simulation models as in this example are stochastic, which require the execution of a large number of simulation tasks and aggregation of results to ascertain if the outcomes lie within specified confidence intervals. Some other simulation models, such as the study of traffic conditions using simulations may need multiple instances to be executed for a number of different parameters. Cloud computing has opened up new avenues for individuals and organizations Shashank Shekhar [email protected] Hamzah Abdel-Aziz [email protected] Michael Walker [email protected] Faruk Caglar [email protected] Aniruddha Gokhale [email protected] Xenofon Koutsoukos [email protected] 1 Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA with limited resources to obtain answers to problems that hitherto required expensive and computationally-intensive resources. This paper presents SIMaaS, which is a cloudbased Simulation-as-a-Service to address these challenges. We demonstrate how lightweight solutions using Linux containers (e.g., Docker) are better suited to support such services instead of heavyweight hypervisor-based solutions, which are shown to incur substantial overhead in provisioning virtual machines on-demand. Empirical results validating our claims are presented in the context of two",
"title": ""
},
{
"docid": "ddddca65683572ff97f8f878e529b32d",
"text": "Human beliefs have remarkable robustness in the face of disconfirmation. This robustness is often explained as the product of heuristics or motivated reasoning. However, robustness can also arise from purely rational principles when the reasoner has recourse to ad hoc auxiliary hypotheses. Auxiliary hypotheses primarily function as the linking assumptions connecting different beliefs to one another and to observational data, but they can also function as a \"protective belt\" that explains away disconfirmation by absorbing some of the blame. The present article traces the role of auxiliary hypotheses from philosophy of science to Bayesian models of cognition and a host of behavioral phenomena, demonstrating their wide-ranging implications.",
"title": ""
},
{
"docid": "a85d07ae3f19a0752f724b39df5eca2b",
"text": "Despite two decades of intensive research, it remains a challenge to design a practical anonymous two-factor authentication scheme, for the designers are confronted with an impressive list of security requirements (e.g., resistance to smart card loss attack) and desirable attributes (e.g., local password update). Numerous solutions have been proposed, yet most of them are shortly found either unable to satisfy some critical security requirements or short of a few important features. To overcome this unsatisfactory situation, researchers often work around it in hopes of a new proposal (but no one has succeeded so far), while paying little attention to the fundamental question: whether or not there are inherent limitations that prevent us from designing an “ideal” scheme that satisfies all the desirable goals? In this work, we aim to provide a definite answer to this question. We first revisit two foremost proposals, i.e. Tsai et al.'s scheme and Li's scheme, revealing some subtleties and challenges in designing such schemes. Then, we systematically explore the inherent conflicts and unavoidable trade-offs among the design criteria. Our results indicate that, under the current widely accepted adversarial model, certain goals are beyond attainment. This also suggests a negative answer to the open problem left by Huang et al. in 2014. To the best of knowledge, the present study makes the first step towards understanding the underlying evaluation metric for anonymous two-factor authentication, which we believe will facilitate better design of anonymous two-factor protocols that offer acceptable trade-offs among usability, security and privacy.",
"title": ""
},
{
"docid": "e8a2a052078633adbb613e7898428c69",
"text": "Human iris is considered a reliable and accurate modality for biometric recognition due to its unique texture information. However, similar to other biometric modalities, iris recognition systems are also vulnerable to presentation attacks (commonly called spoofing) that attempt to conceal or impersonate identity. Examples of typical iris spoofing attacks are printed iris images, textured contact lenses, and synthetic creation of iris images. It is critical to note that majority of the algorithms proposed in the literature are trained to handle a specific type of spoofing attack. These algorithms usually perform very well on that particular attack. However, in real-world applications, an attacker may perform different spoofing attacks. In such a case, the problem becomes more challenging due to inherent variations in different attacks. In this paper, we focus on a medley of iris spoofing attacks and present a unified framework for detecting such attacks. We propose a novel structural and textural feature based iris spoofing detection framework (DESIST). Multi-order dense Zernike moments are calculated across the iris image which encode variations in structure of the iris image. Local Binary Pattern with Variance (LBPV) is utilized for representing textural changes in a spoofed iris image. The highest classification accuracy of 82.20% is observed by the proposed framework for detecting normal and spoofed iris images on a combined iris spoofing database.",
"title": ""
}
] |
scidocsrr
|
c221b48264b1fea8d920dfbf75f89510
|
Mining actionlet ensemble for action recognition with depth cameras
|
[
{
"docid": "8b51b2ee7385649bc48ba4febe0ec4c3",
"text": "This paper presents a HMM-based methodology for action recogni-tion using star skeleton as a representative descriptor of human posture. Star skeleton is a fast skeletonization technique by connecting from centroid of target object to contour extremes. To use star skeleton as feature for action recognition, we clearly define the fea-ture as a five-dimensional vector in star fashion because the head and four limbs are usually local extremes of human shape. In our proposed method, an action is composed of a series of star skeletons over time. Therefore, time-sequential images expressing human action are transformed into a feature vector sequence. Then the fea-ture vector sequence must be transformed into symbol sequence so that HMM can model the action. We design a posture codebook, which contains representative star skeletons of each action type and define a star distance to measure the similarity between feature vec-tors. Each feature vector of the sequence is matched against the codebook and is assigned to the symbol that is most similar. Conse-quently, the time-sequential images are converted to a symbol posture sequence. We use HMMs to model each action types to be recognized. In the training phase, the model parameters of the HMM of each category are optimized so as to best describe the training symbol sequences. For human action recognition, the model which best matches the observed symbol sequence is selected as the recog-nized category. We implement a system to automatically recognize ten different types of actions, and the system has been tested on real human action videos in two cases. One case is the classification of 100 video clips, each containing a single action type. A 98% recog-nition rate is obtained. The other case is a more realistic situation in which human takes a series of actions combined. An action-series recognition is achieved by referring a period of posture history using a sliding window scheme. The experimental results show promising performance.",
"title": ""
}
] |
[
{
"docid": "b1ba9d65373fc7bd57259fb1fc252298",
"text": "BACKGROUND\nFocus group studies are increasingly published in health related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advise on how to do this is lacking. Our objectives were firstly, to describe the current status of sample size in focus group studies reported in health journals. Secondly, to assess whether and how researchers explain the number of focus groups they carry out.\n\n\nMETHODS\nWe searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also did a qualitative assessment of the papers with regard to how number of groups was explained and discussed.\n\n\nRESULTS\nWe identified 220 papers published in 117 journals. In these papers insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory where data collection and analysis is an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for number of groups. Too much data as a study weakness was not an issue discussed in any of the reviewed papers.\n\n\nCONCLUSIONS\nBased on these findings we suggest that journals adopt more stringent requirements for focus group method reporting. The often poor and inconsistent reporting seen in these studies may also reflect the lack of clear, evidence-based guidance about deciding on sample size. More empirical research is needed to develop focus group methodology.",
"title": ""
},
{
"docid": "68093a9767aea52026a652813c3aa5fd",
"text": "Conventional capacitively coupled neural recording amplifiers often present a large input load capacitance to the neural signal source and hence take up large circuit area. They suffer due to the unavoidable trade-off between the input capacitance and chip area versus the amplifier gain. In this work, this trade-off is relaxed by replacing the single feedback capacitor with a clamped T-capacitor network. With this simple modification, the proposed amplifier can achieve the same mid-band gain with less input capacitance, resulting in a higher input impedance and a smaller silicon area. Prototype neural recording amplifiers based on this proposal were fabricated in 0.35 μm CMOS, and their performance is reported. The amplifiers occupy smaller area and have lower input loading capacitance compared to conventional neural amplifiers. One of the proposed amplifiers occupies merely 0.056 mm2. It achieves 38.1-dB mid-band gain with 1.6 pF input capacitance, and hence has an effective feedback capacitance of 20 fF. Consuming 6 μW, it has an input referred noise of 13.3 μVrms over 8.5 kHz bandwidth and NEF of 7.87. In-vivo recordings from animal experiments are also demonstrated.",
"title": ""
},
{
"docid": "d98809ba1dd612fb1d73e72cc8b40096",
"text": "Recent advances in functional magnetic resonance imaging (fMRI) data acquisition and processing techniques have made real-time fMRI (rtfMRI) of localized brain areas feasible, reliable and less susceptible to artefacts. Previous studies have shown that healthy subjects learn to control local brain activity with operant training by using rtfMRI-based neurofeedback. In the present study, we investigated whether healthy subjects could voluntarily gain control over right anterior insular activity. Subjects were provided with continuously updated information of the target ROI's level of activation by visual feedback. All participants were able to successfully regulate BOLD-magnitude in the right anterior insular cortex within three sessions of 4 min each. Training resulted in a significantly increased activation cluster in the anterior portion of the right insula across sessions. An increased activity was also found in the left anterior insula but the percent signal change was lower than in the target ROI. Two different control conditions intended to assess the effects of non-specific feedback and mental imagery demonstrated that the training effect was not due to unspecific activations or non feedback-related cognitive strategies. Both control groups showed no enhanced activation across the sessions, which confirmed our main hypothesis that rtfMRI feedback is area-specific. The increased activity in the right anterior insula during training demonstrates that the effects observed are anatomically specific and self-regulation of right anterior insula only is achievable. This is the first group study investigating the volitional control of emotionally relevant brain region by using rtfMRI training and confirms that self-regulation of local brain activity with rtfMRI is possible.",
"title": ""
},
{
"docid": "6d8bd77d78263f6a98b23d1759417d94",
"text": "Implementations of word sense disambiguation (WSD) algorithms tend to be tied to a particular test corpus format and sense inventory. This makes it difficult to test their performance on new data sets, or to compare them against past algorithms implemented for different data sets. In this paper we present DKPro WSD, a freely licensed, general-purpose framework for WSD which is both modular and extensible. DKPro WSD abstracts the WSD process in such a way that test corpora, sense inventories, and algorithms can be freely swapped. Its UIMA-based architecture makes it easy to add support for new resources and algorithms. Related tasks such as word sense induction and entity linking are also supported.",
"title": ""
},
{
"docid": "03c14c8dff455afdaab6fd3ddc4dcc35",
"text": "BACKGROUND\nAdolescents and college students are at high risk for initiating alcohol use and high-risk (or binge) drinking. There is a growing body of literature on neurotoxic and harmful cognitive effects of drinking by young people. On average, youths take their first drink at age 12 years.\n\n\nMETHODS\nMEDLINE search on neurologic and cognitive effects of underage drinking.\n\n\nRESULTS\nProblematic alcohol consumption is not a benign condition that resolves with age. Individuals who first use alcohol before age 14 years are at increased risk of developing alcohol use disorders. Underage drinkers are susceptible to immediate consequences of alcohol use, including blackouts, hangovers, and alcohol poisoning and are at elevated risk of neurodegeneration (particularly in regions of the brain responsible for learning and memory), impairments in functional brain activity, and the appearance of neurocognitive deficits. Heavy episodic or binge drinking impairs study habits and erodes the development of transitional skills to adulthood.\n\n\nCONCLUSIONS\nUnderage alcohol use is associated with brain damage and neurocognitive deficits, with implications for learning and intellectual development. Impaired intellectual development may continue to affect individuals into adulthood. It is imperative for policymakers and organized medicine to address the problem of underage drinking.",
"title": ""
},
{
"docid": "192b4a503a903747caffe5ea03c31c16",
"text": "We analyze and reframe AI progress. In addition to the prevailing metrics of performance, we highlight the usually neglected costs paid in the development and deployment of a system, including: data, expert knowledge, human oversight, software resources, computing cycles, hardware and network facilities, development time, etc. These costs are paid throughout the life cycle of an AI system, fall differentially on different individuals, and vary in magnitude depending on the replicability and generality of the AI solution. The multidimensional performance and cost space can be collapsed to a single utility metric for a user with transitive and complete preferences. Even absent a single utility function, AI advances can be generically assessed by whether they expand the Pareto (optimal) surface. We explore a subset of these neglected dimensions using the two case studies of Alpha* and ALE. This broadened conception of progress in AI should lead to novel ways of measuring success in AI, and can help set milestones for future progress.",
"title": ""
},
{
"docid": "93325e6f1c13889fb2573f4631d021a5",
"text": "The difference between a computer game and a simulator can be a small one both require the same capabilities from the computer: realistic graphics, behavior consistent with the laws of physics, a variety of scenarios where difficulties can emerge, and some assessment technique to inform users of performance. Computer games are a multi-billion dollar industry in the United States, and as the production costs and complexity of games have increased, so has the effort to make their creation easier. Commercial software products have been developed to greatly simpl ify the game-making process, allowing developers to focus on content rather than on programming. This paper investigates Unity3D game creation software for making threedimensional engine-room simulators. Unity3D is arguably the best software product for game creation, and has been used for numerous popular and successful commercial games. Maritime universities could greatly benefit from making custom simulators to fit specific applications and requirements, as well as from reducing the cost of purchasing simulators. We use Unity3D to make a three-dimensional steam turbine simulator that achieves a high degree of realism. The user can walk around the turbine, open and close valves, activate pumps, and run the turbine. Turbine operating parameters such as RPM, condenser vacuum, lube oil temperature. and governor status are monitored. In addition, the program keeps a log of any errors made by the operator. We find that with the use of Unity3D, students and faculty are able to make custom three-dimensional ship and engine room simulators that can be used as training and evaluation tools.",
"title": ""
},
{
"docid": "ce786570fc3565145d980a4c53c3d292",
"text": "Existing digital hearing aids, to our knowledge, all exclude ANSI S1.11-compliant filter banks because of the high computational complexity. Most ANSI S1.11 designs are IIR- based and only applicable in applications where linear phase is not important. This paper presents an FIR-based ANSI S1.11 filter bank for digital hearing aids, which adopts a multi-rate architecture to reduce the data rates on the bandwidth-limited bands. A systematic way is also proposed to minimize the FIR orders thereof. In an 18-band digital hearing aid with 24 kHz input sampling rate, the proposed design with linear phase has comparable computational complexity with IIR filter banks. Moreover, our design requires only 4% multiplications and additions of a straightforward FIR implementation.",
"title": ""
},
{
"docid": "3790ec7f10c014fa56d3890060ed8bce",
"text": "Since LCL filter has smaller inductance value comparing to L type filter with the same performance in harmonic suppression. it is gradually used in high-power and low-frequency current-source-controlled grid-connected converters. However design of LCL filter's parameter not only relates switch frequency ripple attenuation, but also impacts on performance of grid-connected current controller. This paper firstly introduced a harmonic model of LCL filter in grid-connected operation, then researched the variable relationship among LCL filter's parameter and resonance frequency and high-frequency ripple attenuation. Based on above analysis a reasonable design method was brought out in order to achieve optimal effect under the precondition of saving inductance magnetic core of LCL filter, at the same time guaranteeing the resonance frequency of LCL filter was not too small lest restrict current controller resign. Finally this design method was verified by the experimental results.",
"title": ""
},
{
"docid": "522e384f4533ca656210561be9afbdab",
"text": "Every software program that interacts with a user requires a user interface. Model-View-Controller (MVC) is a common design pattern to integrate a user interface with the application domain logic. MVC separates the representation of the application domain (Model) from the display of the application's state (View) and user interaction control (Controller). However, studying the literature reveals that a variety of other related patterns exists, which we denote with Model-View- (MV) design patterns. This paper discusses existing MV patterns classified in three main families: Model-View-Controller (MVC), Model-View-View Model (MVVM), and Model-View-Presenter (MVP). We take a practitioners' point of view and emphasize the essentials of each family as well as the differences. The study shows that the selection of patterns should take into account the use cases and quality requirements at hand, and chosen technology. We illustrate the selection of a pattern with an example of our practice. The study results aim to bring more clarity in the variety of MV design patterns and help practitioners to make better grounded decisions when selecting patterns.",
"title": ""
},
{
"docid": "f262aba2003f986012bbec1a9c2fcb83",
"text": "Hemiplegic migraine is a rare form of migraine with aura that involves motor aura (weakness). This type of migraine can occur as a sporadic or a familial disorder. Familial forms of hemiplegic migraine are dominantly inherited. Data from genetic studies have implicated mutations in genes that encode proteins involved in ion transportation. However, at least a quarter of the large families affected and most sporadic cases do not have a mutation in the three genes known to be implicated in this disorder, suggesting that other genes are still to be identified. Results from functional studies indicate that neuronal hyperexcitability has a pivotal role in the pathogenesis of hemiplegic migraine. The clinical manifestations of hemiplegic migraine range from attacks with short-duration hemiparesis to severe forms with recurrent coma and prolonged hemiparesis, permanent cerebellar ataxia, epilepsy, transient blindness, or mental retardation. Diagnosis relies on a careful patient history and exclusion of potential causes of symptomatic attacks. The principles of management are similar to those for common varieties of migraine, except that vasoconstrictors, including triptans, are historically contraindicated but are often used off-label to stop the headache, and prophylactic treatment can include lamotrigine and acetazolamide.",
"title": ""
},
{
"docid": "0a0cc3c3d3cd7e7c3e8b409554daa5a3",
"text": "Purpose: We investigate the extent of voluntary disclosures in UK higher education institutions’ (HEIs) annual reports and examine whether internal governance structures influence disclosure in the period following major reform and funding constraints. Design/methodology/approach: We adopt a modified version of Coy and Dixon’s (2004) public accountability index, referred to in this paper as a public accountability and transparency index (PATI), to measure the extent of voluntary disclosures in 130 UK HEIs’ annual reports. Informed by a multitheoretical framework drawn from public accountability, legitimacy, resource dependence and stakeholder perspectives, we propose that the characteristics of governing and executive structures in UK universities influence the extent of their voluntary disclosures. Findings: We find a large degree of variability in the level of voluntary disclosures by universities and an overall relatively low level of PATI (44%), particularly with regards to the disclosure of teaching/research outcomes. We also find that audit committee quality, governing board diversity, governor independence, and the presence of a governance committee are associated with the level of disclosure. Finally, we find that the interaction between executive team characteristics and governance variables enhances the level of voluntary disclosures, thereby providing support for the continued relevance of a ‘shared’ leadership in the HEIs’ sector towards enhancing accountability and transparency in HEIs. Research limitations/implications: In spite of significant funding cuts, regulatory reforms and competitive challenges, the level of voluntary disclosure by UK HEIs remains low. Whilst the role of selected governance mechanisms and ‘shared leadership’ in improving disclosure, is asserted, the varying level and selective basis of the disclosures across the surveyed HEIs suggest that the public accountability motive is weaker relative to the other motives underpinned by stakeholder, legitimacy and resource dependence perspectives. Originality/value: This is the first study which explores the association between HEI governance structures, managerial characteristics and the level of disclosure in UK HEIs.",
"title": ""
},
{
"docid": "c34b474b06d21d1bebdcb8a37b8470c5",
"text": "Using machine learning to analyze data often results in developer exhaust – code, logs, or metadata that do not de ne the learning algorithm but are byproducts of the data analytics pipeline. We study how the rich information present in developer exhaust can be used to approximately solve otherwise complex tasks. Speci cally, we focus on using log data associated with training deep learning models to perform model search by predicting performance metrics for untrainedmodels. Instead of designing a di erent model for each performance metric, we present two preliminary methods that rely only on information present in logs to predict these characteristics for di erent architectures. We introduce (i) a nearest neighbor approachwith a hand-crafted edit distancemetric to comparemodel architectures and (ii) a more generalizable, end-to-end approach that trains an LSTM using model architectures and associated logs to predict performancemetrics of interest.We performmodel search optimizing for best validation accuracy, degree of over tting, and best validation accuracy given a constraint on training time. Our approaches can predict validation accuracy within 1.37% error on average, while the baseline achieves 4.13% by using the performance of a trainedmodel with the closest number of layers.When choosing the best performing model given constraints on training time, our approaches select the top-3 models that overlap with the true top3 models 82% of the time, while the baseline only achieves this 54% of the time. Our preliminary experiments hold promise for how developer exhaust can help learnmodels that can approximate various complex tasks e ciently. ACM Reference Format: Jian Zhang, Max Lam, Stephanie Wang, Paroma Varma, Luigi Nardi, Kunle Olukotun, Christopher Ré. 2018. Exploring the Utility of Developer Exhaust. In DEEM’18: International Workshop on Data Management for End-to-End Machine Learning, June 15, 2018, Houston, TX, USA.",
"title": ""
},
{
"docid": "544cfa381dad24a53a31e368e10d8f75",
"text": "Several previous works have shown that TCP exhibits poor performance in mobile ad hoc networks (MANETs). The ultimate reason for this is that MANETs behave in a significantly different way from traditional wired networks, like the Internet, for which TCP was originally designed. In this paper we propose a novel transport protocol - named TPA - specifically tailored to the characteristics of the MANET environment. It is based on a completely new congestion control mechanism, and designed in such a way to minimize the number of useless transmissions and, hence, power consumption. Furthermore, it is able to manage efficiently route changes and route failures. We evaluated the TPA protocol in a static scenario where TCP exhibits good performance. Simulation results show that, even in such a scenario, TPA significantly outperforms TCP.",
"title": ""
},
{
"docid": "748b470bfbd62b5ddf747e3ef989e66d",
"text": "Purpose – This paper sets out to integrate research on knowledge management with the dynamic capabilities approach. This paper will add to the understanding of dynamic capabilities by demonstrating that dynamic capabilities can be seen as composed of concrete and well-known knowledge management activities. Design/methodology/approach – This paper is based on a literature review focusing on key knowledge management processes and activities as well as the concept of dynamic capabilities, the paper connects these two approaches. The analysis is centered on knowledge management activities which then are compiled into dynamic capabilities. Findings – In the paper eight knowledge management activities are identified; knowledge creation, acquisition, capture, assembly, sharing, integration, leverage, and exploitation. These activities are assembled into the three dynamic capabilities of knowledge development, knowledge (re)combination, and knowledge use. The dynamic capabilities and the associated knowledge management activities create flows to and from the firm’s stock of knowledge and they support the creation and use of organizational capabilities. Practical implications – The findings in the paper demonstrate that the somewhat elusive concept of dynamic capabilities can be untangled through the use of knowledge management activities. Practicing managers struggling with the operationalization of dynamic capabilities should instead focus on the contributing knowledge management activities in order to operationalize and utilize the concept of dynamic capabilities. Originality/value – The paper demonstrates that the existing research on knowledge management can be a key contributor to increasing our understanding of dynamic capabilities. This finding is valuable for both researchers and practitioners.",
"title": ""
},
{
"docid": "5fc3da9b59e9a2a7c26fa93445c68933",
"text": "A country's growth is strongly measured by quality of its education system. Education sector, across the globe has witnessed sea change in its functioning. Today it is recognized as an industry and like any other industry it is facing challenges, the major challenges of higher education being decrease in students' success rate and their leaving a course without completion. An early prediction of students' failure may help the management provide timely counseling as well coaching to increase success rate and student retention. We use different classification techniques to build performance prediction model based on students' social integration, academic integration, and various emotional skills which have not been considered so far. Two algorithms J48 (Implementation of C4.5) and Random Tree have been applied to the records of MCA students of colleges affiliated to Guru Gobind Singh Indraprastha University to predict third semester performance. Random Tree is found to be more accurate in predicting performance than J48 algorithm.",
"title": ""
},
{
"docid": "a936f3ea3a168c959c775dbb50a5faf2",
"text": "From the Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts. Address correspondence to Dr. Schmahmann, Department of Neurology, VBK 915, Massachusetts General Hospital, Fruit St., Boston, MA 02114; [email protected] (E-mail). Copyright 2004 American Psychiatric Publishing, Inc. Disorders of the Cerebellum: Ataxia, Dysmetria of Thought, and the Cerebellar Cognitive Affective Syndrome",
"title": ""
},
{
"docid": "9eabe9a867edbceee72bd20d483ad886",
"text": "Inspired by recent advances of deep learning in instance segmentation and object tracking, we introduce the concept of convnet-based guidance applied to video object segmentation. Our model proceeds on a per-frame basis, guided by the output of the previous frame towards the object of interest in the next frame. We demonstrate that highly accurate object segmentation in videos can be enabled by using a convolutional neural network (convnet) trained with static images only. The key component of our approach is a combination of offline and online learning strategies, where the former produces a refined mask from the previous frame estimate and the latter allows to capture the appearance of the specific object instance. Our method can handle different types of input annotations such as bounding boxes and segments while leveraging an arbitrary amount of annotated frames. Therefore our system is suitable for diverse applications with different requirements in terms of accuracy and efficiency. In our extensive evaluation, we obtain competitive results on three different datasets, independently from the type of input annotation.",
"title": ""
},
{
"docid": "06731beb8a4563ed89338b4cba88d1df",
"text": "It has been almost five years since the ISO adopted a standard for measurement of image resolution of digital still cameras using slanted-edge gradient analysis. The method has also been applied to the spatial frequency response and MTF of film and print scanners, and CRT displays. Each of these applications presents challenges to the use of the method. Previously, we have described causes of both bias and variation error in terms of the various signal processing steps involved. This analysis, when combined with observations from practical systems testing, has suggested improvements and interpretation of results. Specifically, refinements in data screening for signal encoding problems, edge feature location and slope estimation, and noise resilience will be addressed.",
"title": ""
}
] |
scidocsrr
|
9460f28f7e58f1d6a0f066bfccd32179
|
An intelligent discussion-bot for answering student queries in threaded discussions
|
[
{
"docid": "50d0b1e141bcea869352c9b96b0b2ad5",
"text": "In this paper we present the features of a Question/Answering (Q/A) system that had unparalleled performance in the TREC-9 evaluations. We explain the accuracy of our system through the unique characteristics of its architecture: (1) usage of a wide-coverage answer type taxonomy; (2) repeated passage retrieval; (3) lexico-semantic feedback loops; (4) extraction of the answers based on machine learning techniques; and (5) answer caching. Experimental results show the effects of each feature on the overall performance of the Q/A system and lead to general conclusions about Q/A from large text collections.",
"title": ""
}
] |
[
{
"docid": "bd7664e9ff585a48adca12c0a8d9bf95",
"text": "Fueled by the widespread adoption of sensor-enabled smartphones, mobile crowdsourcing is an area of rapid innovation. Many crowd-powered sensor systems are now part of our daily life -- for example, providing highway congestion information. However, participation in these systems can easily expose users to a significant drain on already limited mobile battery resources. For instance, the energy burden of sampling certain sensors (such as WiFi or GPS) can quickly accumulate to levels users are unwilling to bear. Crowd system designers must minimize the negative energy side-effects of participation if they are to acquire and maintain large-scale user populations.\n To address this challenge, we propose Piggyback CrowdSensing (PCS), a system for collecting mobile sensor data from smartphones that lowers the energy overhead of user participation. Our approach is to collect sensor data by exploiting Smartphone App Opportunities -- that is, those times when smartphone users place phone calls or use applications. In these situations, the energy needed to sense is lowered because the phone need no longer be woken from an idle sleep state just to collect data. Similar savings are also possible when the phone either performs local sensor computation or uploads the data to the cloud. To efficiently use these sporadic opportunities, PCS builds a lightweight, user-specific prediction model of smartphone app usage. PCS uses this model to drive a decision engine that lets the smartphone locally decide which app opportunities to exploit based on expected energy/quality trade-offs.\n We evaluate PCS by analyzing a large-scale dataset (containing 1,320 smartphone users) and building an end-to-end crowdsourcing application that constructs an indoor WiFi localization database. Our findings show that PCS can effectively collect large-scale mobile sensor datasets (e.g., accelerometer, GPS, audio, image) from users while using less energy (up to 90% depending on the scenario) compared to a representative collection of existing approaches.",
"title": ""
},
{
"docid": "5824a316f20751183676850c119c96cd",
"text": " Proposed method – Max-RGB & Gray-World • Instantiations of Minkowski norm – Optimal illuminant estimate • L6 norm: Working best overall",
"title": ""
},
{
"docid": "eb3fad94acaf1f36783fdb22f3932ec7",
"text": "This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.",
"title": ""
},
{
"docid": "cbcb20173f4e012253c51020932e75a6",
"text": "We investigate methods for combining multiple selfsupervised tasks—i.e., supervised tasks where data can be collected without manual labeling—in order to train a single visual representation. First, we provide an apples-toapples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for “harmonizing” network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks—even via a na¨ýve multihead architecture—always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.",
"title": ""
},
{
"docid": "082894a8498a5c22af8903ad8ea6399a",
"text": "Despite the proliferation of mobile health applications, few target low literacy users. This is a matter of concern because 43% of the United States population is functionally illiterate. To empower everyone to be a full participant in the evolving health system and prevent further disparities, we must understand the design needs of low literacy populations. In this paper, we present two complementary studies of four graphical user interface (GUI) widgets and three different cross-page navigation styles in mobile applications with a varying literacy, chronically-ill population. Participant's navigation and interaction styles were documented while they performed search tasks using high fidelity prototypes running on a mobile device. Results indicate that participants could use any non-text based GUI widgets. For navigation structures, users performed best when navigating a linear structure, but preferred the features of cross-linked navigation. Based on these findings, we provide some recommendations for designing accessible mobile applications for varying-literacy populations.",
"title": ""
},
{
"docid": "ecc31d1d7616e014a3a032d14e149e9b",
"text": "It has been proposed that sexual stimuli will be processed in a comparable manner to other evolutionarily meaningful stimuli (such as spiders or snakes) and therefore elicit an attentional bias and more attentional engagement (Spiering and Everaerd, In E. Janssen (Ed.), The psychophysiology of sex (pp. 166-183). Bloomington: Indiana University Press, 2007). To investigate early and late attentional processes while looking at sexual stimuli, heterosexual men (n = 12) viewed pairs of sexually preferred (images of women) and sexually non-preferred images (images of girls, boys or men), while eye movements were measured. Early attentional processing (initial orienting) was assessed by the number of first fixations and late attentional processing (maintenance of attention) was assessed by relative fixation time. Results showed that relative fixation time was significantly longer for sexually preferred stimuli than for sexually non-preferred stimuli. Furthermore, the first fixation was more often directed towards the preferred sexual stimulus, when simultaneously presented with a non-sexually preferred stimulus. Thus, the current study showed for the first time an attentional bias to sexually relevant stimuli when presented simultaneously with sexually irrelevant pictures. This finding, along with the discovery that heterosexual men maintained their attention to sexually relevant stimuli, highlights the importance of investigating early and late attentional processes while viewing sexual stimuli. Furthermore, the current study showed that sexually relevant stimuli are favored by the human attentional system.",
"title": ""
},
{
"docid": "031562142f7a2ffc64156f9d09865604",
"text": "The demand for video content is continuously increasing as video sharing on the Internet is becoming enormously popular recently. This demand, with its high bandwidth requirements, has a considerable impact on the load of the network infrastructure. As more users access videos from their mobile devices, the load on the current wireless infrastructure (which has limited capacity) will be even more significant. Based on observations from many local video sharing scenarios, in this paper, we study the tradeoffs of using Wi-Fi ad-hoc mode versus infrastructure mode for video streaming between adjacent devices. We thus show the potential of direct device-to-device communication as a way to reduce the load on the wireless infrastructure and to improve user experiences. Setting up experiments for WiFi devices connected in ad-hoc mode, we collect measurements for various video streaming scenarios and compare them to the case where the devices are connected through access points. The results show the improvements in latency, jitter and loss rate. More importantly, the results show that the performance in direct device-to-device streaming is much more stable in contrast to the access point case, where different factors affect the performance causing widely unpredictable qualities.",
"title": ""
},
{
"docid": "d2539660125472b260507fa938432980",
"text": "This paper reports on one cycle of a design-based research (DBR) study in which mCSCL was explored through an iterative process of (re)designing and testing the collaboration and learning approach with students. A unique characteristic of our mCSCL approach is the student-led emergent formation of groups. The mCSCL application assigns each student a component of a Chinese character and requires them to form groups that can assemble a Chinese character using the components held by the group members. The enactment of the learning design in two modes (with and without the digital technology) was observed, and the actual process of students being scaffolded technologically or socially to accomplish their task was analyzed. Students were found to favor the card mode over the phone mode due to the emergent game strategy (social scaffold) of “trial and error” that they found it comfortable in applying. That triggered us to examine the scaffolding strategies by conducting another round of literature review. We explored domainoriented theories (i.e. in Chinese character learning) to inform and guide them in deciding how they should further accommodate or rectify the students‟ use of the strategy. This cycle of DBR in Chinese-PP project has effectively reshaped the overall learning model design. This paper brings to the fore the value of the interplay and iterations of theories, implementations and reflections, in no fixed order, as advocated by DBR.",
"title": ""
},
{
"docid": "f1d0fc62f47c5fd4f47716a337fd9ed0",
"text": "We present the system architecture of a mobile outdoor augmented reality system for the Archeoguide project. We begin with a short introduction to the project. Then we present the hardware we chose for the mobile system and we describe the system architecture we designed for the software implementation. We conclude this paper with the first results obtained from experiments we made during our trials at ancient Olympia in Greece.",
"title": ""
},
{
"docid": "a6889a6dd3dbdc4488ba01653acbc386",
"text": "OBJECTIVE\nTo succinctly summarise five contemporary theories about motivation to learn, articulate key intersections and distinctions among these theories, and identify important considerations for future research.\n\n\nRESULTS\nMotivation has been defined as the process whereby goal-directed activities are initiated and sustained. In expectancy-value theory, motivation is a function of the expectation of success and perceived value. Attribution theory focuses on the causal attributions learners create to explain the results of an activity, and classifies these in terms of their locus, stability and controllability. Social- cognitive theory emphasises self-efficacy as the primary driver of motivated action, and also identifies cues that influence future self-efficacy and support self-regulated learning. Goal orientation theory suggests that learners tend to engage in tasks with concerns about mastering the content (mastery goal, arising from a 'growth' mindset regarding intelligence and learning) or about doing better than others or avoiding failure (performance goals, arising from a 'fixed' mindset). Finally, self-determination theory proposes that optimal performance results from actions motivated by intrinsic interests or by extrinsic values that have become integrated and internalised. Satisfying basic psychosocial needs of autonomy, competence and relatedness promotes such motivation. Looking across all five theories, we note recurrent themes of competence, value, attributions, and interactions between individuals and the learning context.\n\n\nCONCLUSIONS\nTo avoid conceptual confusion, and perhaps more importantly to maximise the theory-building potential of their work, researchers must be careful (and precise) in how they define, operationalise and measure different motivational constructs. We suggest that motivation research continue to build theory and extend it to health professions domains, identify key outcomes and outcome measures, and test practical educational applications of the principles thus derived.",
"title": ""
},
{
"docid": "36190ca28bff2390c9037404bda2cd5f",
"text": "In this paper we propose an approach to modeling syntactically-motivated skeletal structure of source sentence for machine translation. This model allows for application of high-level syntactic transfer rules and low-level non-syntactic rules. It thus involves fully syntactic, non-syntactic, and partially syntactic derivations via a single grammar and decoding paradigm. On large-scale Chinese-English and EnglishChinese translation tasks, we obtain an average improvement of +0.9 BLEU across the newswire and web genres.",
"title": ""
},
{
"docid": "fcacf1a443252652dfec05f7061784e1",
"text": "Small point lights (e.g., LEDs) are used as indicators in a wide variety of devices today, from digital watches and toasters, to washing machines and desktop computers. Although exceedingly simple in their output - varying light intensity over time - their design space can be rich. Unfortunately, a survey of contemporary uses revealed that the vocabulary of lighting expression in popular use today is small, fairly unimaginative, and generally ambiguous in meaning. In this paper, we work through a structured design process that points the way towards a much richer set of expressive forms and more effective communication for this very simple medium. In this process, we make use of five different data gathering and evaluation components to leverage the knowledge, opinions and expertise of people outside our team. Our work starts by considering what information is typically conveyed in this medium. We go on to consider potential expressive forms -- how information might be conveyed. We iteratively refine and expand these sets, concluding with ideas gathered from a panel of designers. Our final step was to make use of thousands of human judgments, gathered in a crowd-sourced fashion (265 participants), to measure the suitability of different expressive forms for conveying different information content. This results in a set of recommended light behaviors that mobile devices, such as smartphones, could readily employ.",
"title": ""
},
{
"docid": "159042b301627086b95c4a3374c2083c",
"text": "To achieve a low computational cost when performing online metric learning for large-scale data, we present a one-pass closed-form solution namely OPML in this paper. Typically, the proposed OPML first adopts a onepass triplet construction strategy, which aims to use only a very small number of triplets to approximate the representation ability of whole original triplets obtained by batch-manner methods. Then, OPML employs a closed-form solution to update the metric for new coming samples, which leads to a low space (i.e., O(d)) and time (i.e., O(d 2)) complexity, where d is the feature dimensionality. In addition, an extension of OPML (namely COPML) is further proposed to enhance the robustness when in real case the first several samples come from the same class (i.e., cold start problem). In the experiments, we have systematically evaluated our methods (OPML and COPML) on three typical tasks, including UCI data classification, face verification, and abnormal event detection in videos, which aims to fully evaluate the proposed methods on different sample number, different feature dimensionalities and different feature extraction ways (i.e., hand-crafted and deeplylearned). The results show that OPML and COPML can obtain the promising performance with a very low computational cost. Also, the effectiveness of COPML under the cold start setting is experimentally verified. Disciplines Engineering | Science and Technology Studies Publication Details Li, W., Gao, Y., Wang, L., Zhou, L., Huo, J. & Shi, Y. (2017). OPML: A one-pass closed-form solution for online metric learning. Pattern Recognition, 75 302-31 Authors Wenbin Li, Yang Gao, Lei Wang, Luping Zhou, Jing Huo, and Yinghuan Shi This journal article is available at Research Online: http://ro.uow.edu.au/eispapers1/152 ar X iv :1 60 9. 09 17 8v 1 [c s. LG ] 29 S ep 2 01 6 1 OPML: A One-Pass Closed-Form Solution for Online Metric Learning Wenbin Li 1, Yang Gao1, Lei Wang2, Luping Zhou2, Jing Huo1, and Yinghuan Shi 1 1National Key Laboratory for Novel Software Technology, Nan jing University, China 2School of Computing and Information Technology, Universit y of Wollongong, Australia Abstract—To achieve a low computational cost when performing online metric learning for large-scale data, we present a onepass closed-form solution namely OPML in this paper. Typically, the proposed OPML first adopts a one-pass triplet constructi on strategy, which aims to use only a very small number of triplets to approximate the representation ability of whole original triplets obtained by batch-manner methods. Then, OPML employs a closed-form solution to update the metric for new coming samples, which leads to a low space (i.e., O(d)) and time (i.e.,O(d)) complexity, where d is the feature dimensionality. In addition, an extension of OPML (namely COPML) is further proposed to enhance the robustness when in real case the first several samples come from the same class (i.e., cold start problem). In the experiments, we have systematically evaluated our meth ods (OPML and COPML) on three typical tasks, including UCI data classification, face verification, and abnormal event detec tion in videos, which aims to fully evaluate the proposed methods on different sample number, different feature dimensionalities and different feature extraction ways (i.e., hand-crafted anddeeplylearned). The results show that OPML and COPML can obtain the promising performance with a very low computational cost. 
Also, the effectiveness of COPML under the cold start settin g is experimentally verified.",
"title": ""
},
{
"docid": "7095bf529a060dd0cd7eeb2910998cf8",
"text": "The proliferation of internet along with the attractiveness of the web in recent years has made web mining as the research area of great magnitude. Web mining essentially has many advantages which makes this technology attractive to researchers. The analysis of web user’s navigational pattern within a web site can provide useful information for applications like, server performance enhancements, restructuring a web site, direct marketing in ecommerce etc. The navigation paths may be explored based on some similarity criteria, in order to get the useful inference about the usage of web. The objective of this paper is to propose an effective clustering technique to group users’ sessions by modifying K-means algorithm and suggest a method to compute the distance between sessions based on similarity of their web access path, which takes care of the issue of the user sessions that are of variable",
"title": ""
},
{
"docid": "4419d61684dff89f4678afe3b8dc06e0",
"text": "Reason and emotion have long been considered opposing forces. However, recent psychological and neuroscientific research has revealed that emotion and cognition are closely intertwined. Cognitive processing is needed to elicit emotional responses. At the same time, emotional responses modulate and guide cognition to enable adaptive responses to the environment. Emotion determines how we perceive our world, organise our memory, and make important decisions. In this review, we provide an overview of current theorising and research in the Affective Sciences. We describe how psychological theories of emotion conceptualise the interactions of cognitive and emotional processes. We then review recent research investigating how emotion impacts our perception, attention, memory, and decision-making. Drawing on studies with both healthy participants and clinical populations, we illustrate the mechanisms and neural substrates underlying the interactions of cognition and emotion.",
"title": ""
},
{
"docid": "c84a0f630b4fb2e547451d904e1c63a5",
"text": "Deep neural network training spends most of the computation on examples that are properly handled, and could be ignored. We propose to mitigate this phenomenon with a principled importance sampling scheme that focuses computation on “informative” examples, and reduces the variance of the stochastic gradients during training. Our contribution is twofold: first, we derive a tractable upper bound to the persample gradient norm, and second we derive an estimator of the variance reduction achieved with importance sampling, which enables us to switch it on when it will result in an actual speedup. The resulting scheme can be used by changing a few lines of code in a standard SGD procedure, and we demonstrate experimentally, on image classification, CNN fine-tuning, and RNN training, that for a fixed wall-clock time budget, it provides a reduction of the train losses of up to an order of magnitude and a relative improvement of test errors between 5% and 17%.",
"title": ""
},
{
"docid": "16b64bf865bae192b604faaf6f916ff1",
"text": "Recurrent Neural Networks (RNNs) have obtained excellent result in many natural language processing (NLP) tasks. However, understanding and interpreting the source of this success remains a challenge. In this paper, we propose Recurrent Memory Network (RMN), a novel RNN architecture, that not only amplifies the power of RNN but also facilitates our understanding of its internal functioning and allows us to discover underlying patterns in data. We demonstrate the power of RMN on language modeling and sentence completion tasks. On language modeling, RMN outperforms Long Short-Term Memory (LSTM) network on three large German, Italian, and English dataset. Additionally we perform indepth analysis of various linguistic dimensions that RMN captures. On Sentence Completion Challenge, for which it is essential to capture sentence coherence, our RMN obtains 69.2% accuracy, surpassing the previous state of the art by a large margin.1",
"title": ""
},
{
"docid": "e16a2528e64885b363eb787f5f9440c2",
"text": "We propose to use text recognition to aid in visual object class recognition. To this end we first propose a new algorithm for text detection in natural images. The proposed text detection is based on saliency cues and a context fusion step. The algorithm does not need any parameter tuning and can deal with varying imaging conditions. We evaluate three different tasks: 1. Scene text recognition, where we increase the state-of-the-art by 0.17 on the ICDAR 2003 dataset. 2. Saliency based object recognition, where we outperform other state-of-the-art saliency methods for object recognition on the PASCAL VOC 2011 dataset. 3. Object recognition with the aid of recognized text, where we are the first to report multi-modal results on the IMET set. Results show that text helps for object class recognition if the text is not uniquely coupled to individual object instances.",
"title": ""
},
{
"docid": "a3cb5c10747f21667ec525df93cc3f01",
"text": "With the success of deep learning based approaches in tackling challenging problems in computer vision, a wide range of deep architectures have recently been proposed for the task of visual odometry (VO) estimation. Most of these proposed solutions rely on supervision, which requires the acquisition of precise ground-truth camera pose information, collected using expensive motion capture systems or high-precision IMU/GPS sensor rigs. In this work, we propose an unsupervised paradigm for deep visual odometry learning. We show that using a noisy teacher, which could be a standard VO pipeline, and by designing a loss term that enforces geometric consistency of the trajectory, we can train accurate deep models for VO that do not require ground-truth labels. We leverage geometry as a self-supervisory signal and propose \"Composite Transformation Constraints (CTCs)\", that automatically generate supervisory signals for training and enforce geometric consistency in the VO estimate. We also present a method of characterizing the uncertainty in VO estimates thus obtained. To evaluate our VO pipeline, we present exhaustive ablation studies that demonstrate the efficacy of end-to-end, self-supervised methodologies to train deep models for monocular VO. We show that leveraging concepts from geometry and incorporating them into the training of a recurrent neural network results in performance competitive to supervised deep VO methods.",
"title": ""
},
{
"docid": "67cc93383c8bf7bddcbf27ecd6b7103e",
"text": "Ridesharing platforms use dynamic pricing as a means to control the network's supply and demand at different locations and times (e.g., Lyft's Prime Time and Uber's Surge Pricing) to increase revenue. These algorithms only consider the network's current supply and demand only at a ride's origin to adjust the price of the ride. In this work, we show how we can increase the platform's revenue while lowering the prices as compared to state-of-the-art algorithms, by considering the network's future demand. Furthermore, we show if rather than setting the price of a ride only based on the supply and demand at its origin, we use predictive supply and demand at both the ride's origin and destination, we can further increase the platform's overall revenue. Using a real-world data set from New York City, we show our pricing method can increase the revenue by up to 15% while reducing the price of the rides by an average of 5%. Furthermore, we show that our methods are resilient to up to 25% error in future demand prediction.",
"title": ""
}
] |
scidocsrr
|
a2a7b12b8a08fbcd25fe20136bc79e98
|
Learning to Rank Query Graphs for Complex Question Answering over Knowledge Graphs
|
[
{
"docid": "6e4f0a770fe2a34f99957f252110b6bd",
"text": "Universal Dependencies (UD) provides a cross-linguistically uniform syntactic representation, with the aim of advancing multilingual applications of parsing and natural language understanding. Reddy et al. (2016) recently developed a semantic interface for (English) Stanford Dependencies, based on the lambda calculus. In this work, we introduce UDEPLAMBDA, a similar semantic interface for UD, which allows mapping natural language to logical forms in an almost language-independent framework. We evaluate our approach on semantic parsing for the task of question answering against Freebase. To facilitate multilingual evaluation, we provide German and Spanish translations of the WebQuestions and GraphQuestions datasets. Results show that UDEPLAMBDA outperforms strong baselines across languages and datasets. For English, it achieves the strongest result to date on GraphQuestions, with competitive results on WebQuestions.",
"title": ""
},
{
"docid": "ded1f366eedb42d57bc927de05cefdab",
"text": "A typical knowledge-based question answering (KB-QA) system faces two challenges: one is to transform natural language questions into their meaning representations (MRs); the other is to retrieve answers from knowledge bases (KBs) using generated MRs. Unlike previous methods which treat them in a cascaded manner, we present a translation-based approach to solve these two tasks in one unified framework. We translate questions to answers based on CYK parsing. Answers as translations of the span covered by each CYK cell are obtained by a question translation method, which first generates formal triple queries as MRs for the span based on question patterns and relation expressions, and then retrieves answers from a given KB based on triple queries generated. A linear model is defined over derivations, and minimum error rate training is used to tune feature weights based on a set of question-answer pairs. Compared to a KB-QA system using a state-of-the-art semantic parser, our method achieves better results.",
"title": ""
},
{
"docid": "b4ab51818d868b2f9796540c71a7bd17",
"text": "We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.",
"title": ""
},
{
"docid": "be8b65d39ee74dbee0835052092040da",
"text": "We examine the problem of question answering over knowledge graphs, focusing on simple questions that can be answered by the lookup of a single fact. Adopting a straightforward decomposition of the problem into entity detection, entity linking, relation prediction, and evidence combination, we explore simple yet strong baselines. On the popular SIMPLEQUESTIONS dataset, we find that basic LSTMs and GRUs plus a few heuristics yield accuracies that approach the state of the art, and techniques that do not use neural networks also perform reasonably well. These results show that gains from sophisticated deep learning techniques proposed in the literature are quite modest and that some previous models exhibit unnecessary complexity.",
"title": ""
}
] |
[
{
"docid": "9377e5de9d7a440aa5e73db10aa630f4",
"text": ". Micro-finance programmes targeting women became a major plank of donor poverty alleviation and gender strategies in the 1990s. Increasing evidence of the centrality of gender equality to poverty reduction and women’s higher credit repayment rates led to a general consensus on the desirability of targeting women. Not only ‘reaching’ but also ‘empowering’ women became the second official goal of the Micro-credit Summit Campaign.",
"title": ""
},
{
"docid": "74141327edf56eb5a198f446d12998a0",
"text": "Intramuscular myxomas of the hand are rare entities. Primarily found in the myocardium, these lesions also affect the bone and soft tissues in other parts of the body. This article describes a case of hypothenar muscles myxoma treated with local surgical excision after frozen section biopsy with tumor-free margins. Radiographic images of the axial and appendicular skeleton were negative for fibrous dysplasia, and endocrine studies were within normal limits. The 8-year follow-up period has been uneventful, with no complications. The patient is currently recurrence free, with normal intrinsic hand function.",
"title": ""
},
{
"docid": "5b4f1b4725393a87a83abfe14516dd0c",
"text": "The goal of traffic forecasting is to predict the future vital indicators (such as speed, volume and density) of the local traffic network in reasonable response time. Due to the dynamics and complexity of traffic network flow, typical simulation experiments and classic statistical methods cannot satisfy the requirements of mid-and-long term forecasting. In this work, we propose a novel deep learning framework, Spatio-Temporal Graph Convolutional Neural Network (STGCNN), to tackle this spatio-temporal sequence forecasting task. Instead of applying recurrent models to sequence learning, we build our model entirely on convolutional neural networks (CNNs) with gated linear units (GLU) and highway networks. The proposed architecture fully employs the graph structure of the road networks and enables faster training. Experiments show that our ST-GCNN network captures comprehensive spatio-temporal correlations throughout complex traffic network and consistently outperforms state-of-the-art baseline algorithms on several real-world traffic datasets.",
"title": ""
},
{
"docid": "a04dd1bd1b6107747b2091b8aa2dfeb7",
"text": "This paper presents 300-GHz step-profiled corrugated horn antennas, aiming at their integration in low-temperature co-fired ceramic (LTCC) packages. Using substrate integrated waveguide technology, the cavity inside the multi-layer LTCC substrate and a surrounding via fence are used to form a feeding hollow waveguide and horn structure. Owing to the vertical configuration, we were able to design the corrugations and stepped profile of horn antennas to approximate smooth metallic surface. To verify the design experimentally, the LTCC waveguides and horn antennas were fabricated with an LTCC multi-layer process. The LTCC waveguide exhibits insertion loss of 0.6 dB/mm, and the LTCC horn antenna exhibits 18-dBi peak gain and 100-GHz bandwidth with more than 10-dB return loss. The size of the horn antenna is only 5×5×2.8 mm3, which makes it easy to integrate it in LTCC transceiver modules.",
"title": ""
},
{
"docid": "49ff711b6c91c9ec42e16ce2f3bb435b",
"text": "In this letter, a wideband three-section branch-line hybrid with harmonic suppression is designed using a novel transmission line model. The proposed topology is constructed using a coupled line, two series transmission lines, and open-ended stubs. The required design equations are obtained by applying even- and odd-mode analysis. To support these equations, a three-section branch-line hybrid working at 0.9 GHz is fabricated and tested. The physical area of the prototype is reduced by 87.7% of the conventional hybrid and the fractional bandwidth is greater than 52%. In addition, the proposed technique can eliminate second harmonic by a level better than 15 dB.",
"title": ""
},
{
"docid": "0b17e52a3fd306c1e990b628d41a973f",
"text": "Electronic health records (EHRs) have contributed to the computerization of patient records so that it can be used not only for efficient and systematic medical services, but also for research on data science. In this paper, we compared disease prediction performance of generative adversarial networks (GANs) and conventional learning algorithms in combination with missing value prediction methods. As a result, the highest accuracy of 98.05% was obtained using stacked autoencoder as the missing value prediction method and auxiliary classifier GANs (AC-GANs) as the disease predicting method. Results show that the combination of stacked autoencoder and AC-GANs performs significantly greater than existing algorithms at the problem of disease prediction in which missing values and class imbalance exist.",
"title": ""
},
{
"docid": "21d9828d0851b4ded34e13f8552f3e24",
"text": "Light field cameras have been recently shown to be very effective in applications such as digital refocusing and 3D reconstruction. In a single snapshot these cameras provide a sample of the light field of a scene by trading off spatial resolution with angular resolution. Current methods produce images at a resolution that is much lower than that of traditional imaging devices. However, by explicitly modeling the image formation process and incorporating priors such as Lambertianity and texture statistics, these types of images can be reconstructed at a higher resolution. We formulate this method in a variational Bayesian framework and perform the reconstruction of both the surface of the scene and the (superresolved) light field. The method is demonstrated on both synthetic and real images captured with our light-field camera prototype.",
"title": ""
},
{
"docid": "8470245ef870eb5246d65fa3eb1e760a",
"text": "Educational spaces play an important role in enhancing learning productivity levels of society people as the most important places to human train. Considering the cost, time and energy spending on these spaces, trying to design efficient and optimized environment is a necessity. Achieving efficient environments requires changing environmental criteria so that they can have a positive impact on the activities and learning in users. Therefore, creating suitable conditions for promoting learning in users requires full utilization of the comprehensive knowledge of architecture and the design of the physical environment with respect to the environmental, social and aesthetic dimensions; Which will naturally increase the usefulness of people in space and make optimal use of the expenses spent on building schools and the time spent on education and training.The main aim of this study was to find physical variables affecting on increasing productivity in learning environments. This study is quantitative-qualitative and was done in two research methods: a) survey research methods (survey) b) correlation method. The samples were teachers and students in secondary schools’ in Zahedan city, the sample size was 310 people. Variables were extracted using the literature review and deep interviews with professors and experts. The questionnaire was obtained using variables and it is used to collect the views of teachers and students. Cronbach’s alpha coefficient was 0.89 which indicates that the information gathering tool is acceptable. The findings shows that there are four main physical factor as: 1. Physical comfort, 2. Space layouts, 3. Psychological factors and 4. Visual factors thet they are affecting positively on space productivity. Each of the environmental factors play an important role in improving the learning quality and increasing interest in attending learning environments; therefore, the desired environment improves the productivity of the educational spaces by improving the components of productivity.",
"title": ""
},
{
"docid": "cbc59d5b33865b56e549fd2ffbc43c4a",
"text": "We propose a theory that gives formal semantics to word-level alignments defined over parallel corpora. We use our theory to introduce a linear algorithm that can be used to derive from word-aligned, parallel corpora the minimal set of syntactically motivated transformation rules that explain human translation data.",
"title": ""
},
{
"docid": "f48639ad675b863a28bb1bc773664ab0",
"text": "The definition and phenomenological features of 'burnout' and its eventual relationship with depression and other clinical conditions are reviewed. Work is an indispensable way to make a decent and meaningful way of living, but can also be a source of stress for a variety of reasons. Feelings of inadequate control over one's work, frustrated hopes and expectations and the feeling of losing of life's meaning, seem to be independent causes of burnout, a term that describes a condition of professional exhaustion. It is not synonymous with 'job stress', 'fatigue', 'alienation' or 'depression'. Burnout is more common than generally believed and may affect every aspect of the individual's functioning, have a deleterious effect on interpersonal and family relationships and lead to a negative attitude towards life in general. Empirical research suggests that burnout and depression are separate entities, although they may share several 'qualitative' characteristics, especially in the more severe forms of burnout, and in vulnerable individuals, low levels of satisfaction derived from their everyday work. These final issues need further clarification and should be the focus of future clinical research.",
"title": ""
},
{
"docid": "8aabafcfbb8a1b23e986fc9f4dbf5b01",
"text": "OBJECTIVE\nTo examine the factors associated with the persistence of childhood gender dysphoria (GD), and to assess the feelings of GD, body image, and sexual orientation in adolescence.\n\n\nMETHOD\nThe sample consisted of 127 adolescents (79 boys, 48 girls), who were referred for GD in childhood (<12 years of age) and followed up in adolescence. We examined childhood differences among persisters and desisters in demographics, psychological functioning, quality of peer relations and childhood GD, and adolescent reports of GD, body image, and sexual orientation. We examined contributions of childhood factors on the probability of persistence of GD into adolescence.\n\n\nRESULTS\nWe found a link between the intensity of GD in childhood and persistence of GD, as well as a higher probability of persistence among natal girls. Psychological functioning and the quality of peer relations did not predict the persistence of childhood GD. Formerly nonsignificant (age at childhood assessment) and unstudied factors (a cognitive and/or affective cross-gender identification and a social role transition) were associated with the persistence of childhood GD, and varied among natal boys and girls.\n\n\nCONCLUSION\nIntensity of early GD appears to be an important predictor of persistence of GD. Clinical recommendations for the support of children with GD may need to be developed independently for natal boys and for girls, as the presentation of boys and girls with GD is different, and different factors are predictive for the persistence of GD.",
"title": ""
},
{
"docid": "476bb80edf6c54f0b6415d19f027ee19",
"text": "Spin-transfer torque (STT) switching demonstrated in submicron sized magnetic tunnel junctions (MTJs) has stimulated considerable interest for developments of STT switched magnetic random access memory (STT-MRAM). Remarkable progress in STT switching with MgO MTJs and increasing interest in STTMRAM in semiconductor industry have been witnessed in recent years. This paper will present a review on the progress in the intrinsic switching current density reduction and STT-MRAM prototype chip demonstration. Challenges to overcome in order for STT-MRAM to be a mainstream memory technology in future technology nodes will be discussed. Finally, potential applications of STT-MRAM in embedded and standalone memory markets will be outlined.",
"title": ""
},
{
"docid": "7cc3da275067df8f6c017da37025856c",
"text": "A simple, green method is described for the synthesis of Gold (Au) and Silver (Ag) nanoparticles (NPs) from the stem extract of Breynia rhamnoides. Unlike other biological methods for NP synthesis, the uniqueness of our method lies in its fast synthesis rates (~7 min for AuNPs) and the ability to tune the nanoparticle size (and subsequently their catalytic activity) via the extract concentration used in the experiment. The phenolic glycosides and reducing sugars present in the extract are largely responsible for the rapid reduction rates of Au(3+) ions to AuNPs. Efficient reduction of 4-nitrophenol (4-NP) to 4-aminophenol (4-AP) in the presence of AuNPs (or AgNPs) and NaBH(4) was observed and was found to depend upon the nanoparticle size or the stem extract concentration used for synthesis.",
"title": ""
},
{
"docid": "3921107e01c28a9b739f10c51a48505f",
"text": "The deployment of deep convolutional neural networks (CNNs) in many real world applications is largely hindered by their high computational cost. In this paper, we propose a novel learning scheme for CNNs to simultaneously 1) reduce the model size; 2) decrease the run-time memory footprint; and 3) lower the number of computing operations, without compromising accuracy. This is achieved by enforcing channel-level sparsity in the network in a simple but effective way. Different from many existing approaches, the proposed method directly applies to modern CNN architectures, introduces minimum overhead to the training process, and requires no special software/hardware accelerators for the resulting models. We call our approach network slimming, which takes wide and large networks as input models, but during training insignificant channels are automatically identified and pruned afterwards, yielding thin and compact models with comparable accuracy. We empirically demonstrate the effectiveness of our approach with several state-of-the-art CNN models, including VGGNet, ResNet and DenseNet, on various image classification datasets. For VGGNet, a multi-pass version of network slimming gives a 20× reduction in model size and a 5× reduction in computing operations.",
"title": ""
},
{
"docid": "ac53cbf7b760978a4a4c7fa80095fd31",
"text": "Aggregation queries on data streams are evaluated over evolving and often overlapping logical views called windows. While the aggregation of periodic windows were extensively studied in the past through the use of aggregate sharing techniques such as Panes and Pairs, little to no work has been put in optimizing the aggregation of very common, non-periodic windows. Typical examples of non-periodic windows are punctuations and sessions which can implement complex business logic and are often expressed as user-defined operators on platforms such as Google Dataflow or Apache Storm. The aggregation of such non-periodic or user-defined windows either falls back to expensive, best-effort aggregate sharing methods, or is not optimized at all.\n In this paper we present a technique to perform efficient aggregate sharing for data stream windows, which are declared as user-defined functions (UDFs) and can contain arbitrary business logic. To this end, we first introduce the concept of User-Defined Windows (UDWs), a simple, UDF-based programming abstraction that allows users to programmatically define custom windows. We then define semantics for UDWs, based on which we design Cutty, a low-cost aggregate sharing technique. Cutty improves and outperforms the state of the art for aggregate sharing on single and multiple queries. Moreover, it enables aggregate sharing for a broad class of non-periodic UDWs. We implemented our techniques on Apache Flink, an open source stream processing system, and performed experiments demonstrating orders of magnitude of reduction in aggregation costs compared to the state of the art.",
"title": ""
},
{
"docid": "84e8986eff7cb95808de8df9ac286e37",
"text": "The purpose of this thesis is to describe one-shot-learning gesture recognition systems developed on the ChaLearn Gesture Dataset [3]. We use RGB and depth images and combine appearance (Histograms of Oriented Gradients) and motion descriptors (Histogram of Optical Flow) for parallel temporal segmentation and recognition. The Quadratic-Chi distance family is used to measure differences between histograms to capture cross-bin relationships. We also propose a new algorithm for trimming videos — to remove all the unimportant frames from videos. Our two methods both outperform other published methods and help narrow down the gap between human performance and algorithms on this task. The code has been made publicly available in the MLOSS repository.",
"title": ""
},
{
"docid": "ae687136682fd78e9a92797c2c24ddb0",
"text": "Not all global health issues are truly global, but the neglected epidemic of stillbirths is one such urgent concern. The Lancet’s fi rst Series on stillbirths was published in 2011. Thanks to tenacious eff orts by the authors of that Series, led by Joy Lawn, together with the impetus of a wider maternal and child health community, stillbirths have been recognised as an essential part of the post-2015 sustainable development agenda, expressed through a new Global Strategy for Women’s, Children’s and Adolescents’ Health which was launched at the UN General Assembly in 2015. But recognising is not the same as doing. We now present a second Series on stillbirths, which is predicated on the idea of ending preventable stillbirth deaths by 2030. As this Series amply proves, such an ambitious goal is possible. The fi ve Series papers off er a roadmap for eliminating one of the most neglected tragedies in global health today. Perhaps the greatest obstacle to addressing stillbirths is stigma. The utter despair and hopelessness felt by families who suff er a stillbirth is often turned inwards to fuel feelings of shame and failure. The idea of demanding action would be anathema for many women and men who have experienced the loss of a child in this appalling way. This Series dispels any notion that such self-recrimination is justifi ed. Most stillbirths have preventable causes—maternal infections, chronic diseases, undernutrition, obesity, to name only a few. The solutions to ending preventable stillbirths are therefore practicable, feasible, and cost eff ective. They form a core part of the continuum of care—from prenatal care and antenatal care, through skilled birth attendance, to newborn care. The number of stillbirths remains alarmingly high: 2·6 million stillbirths annually, with little reduction this past decade. But the truly horrifi c fi gure is 1·3 million intrapartum stillbirths. The idea of a child being alive at the beginning of labour and dying for entirely preventable reasons during the next few hours should be a health scandal of international proportions. Yet it is not. Our Series aims to make it so. When a stillbirth does occur, the health system can fail parents further by the absence of respectful, empathetic services, including bereavement care. Yet provision of such care is not only humane and necessary, it can also mitigate a range of negative emotional and psychological symptoms that mothers and fathers experience after the death of their baby, some of which can persist long after their loss. Ten nations account for two-thirds of stillbirths: India, Nigeria, Pakistan, China, Ethiopia, Democratic Republic of the Congo, Bangladesh, Indonesia, Tanzania, and Niger. Although 98% of stillbirths take place in low-income and middle-income countries, stillbirth rates also remain unacceptably high in high-income settings. Why? Partly because stillbirths are strongly linked to adverse social and economic determinants of health. The health system alone cannot address entirely the predicament of stillbirths. Only by tackling the causes of the causes of stillbirths will rates be defl ected downwards in high-income settings. There is one action we believe off ers promising prospects for accelerating progress to end stillbirths—stronger independent accountability both within countries and globally. 
By accountability, we mean better monitoring (with investment in high-quality data collection), stronger review (including, especially, civil society organisations), and more robust action (high-level political leadership, and not merely from a Ministry of Health). The UN’s new Independent Accountability Panel has an important part to play in this process. But the really urgent need is for stronger independent accountability in countries. And here is where a virtuous alliance might lie between health professionals, clinical and public health scientists, and civil society, including bereaved parents. We believe this Series off ers the spark to ignite a new alliance of common interests to end preventable stillbirths by 2030.",
"title": ""
},
{
"docid": "2f737bc87916e67b68aa96910d27b2cb",
"text": "-Imbalanced data set problem occurs in classification, where the number of instances of one class is much lower than the instances of the other classes. The main challenge in imbalance problem is that the small classes are often more useful, but standard classifiers tend to be weighed down by the huge classes and ignore the tiny ones. In machine learning the imbalanced datasets has become a critical problem and also usually found in many applications such as detection of fraudulent calls, bio-medical, engineering, remote-sensing, computer society and manufacturing industries. In order to overcome the problems several approaches have been proposed. In this paper a study on Imbalanced dataset problem and the solution is given.",
"title": ""
},
{
"docid": "dde9424652393fa66350ec6510c20e97",
"text": "Framed under a cognitive approach to task-based L2 learning, this study used a pedagogical approach to investigate the effects of three vocabulary lessons (one traditional and two task-based) on acquisition of basic meanings, forms and morphological aspects of Spanish words. Quantitative analysis performed on the data suggests that the type of pedagogical approach had no impact on immediate retrieval (after treatment) of targeted word forms, but it had an impact on long-term retrieval (one week) of targeted forms. In particular, task-based lessons seemed to be more effective than the Presentation, Practice and Production (PPP) lesson. The analysis also suggests that a task-based lesson with an explicit focus-on-forms component was more effective than a task-based lesson that did not incorporate this component in promoting acquisition of word morphological aspects. The results also indicate that the explicit focus on forms component may be more effective when placed at the end of the lesson, when meaning has been acquired. Results are explained in terms of qualitative differences in amounts of focus on form and meaning, type of form-focused instruction provided, and opportunities for on-line targeted output retrieval. The findings of this study provide evidence for the value of a proactive (Doughty and Williams, 1998a) form-focused approach to Task-Based L2 vocabulary learning, especially structure-based production tasks (Ellis, 2003). Overall, they suggest an important role of pedagogical tasks in teaching L2 vocabulary.",
"title": ""
},
{
"docid": "62cc85ab7517797f50ce5026fbc5617a",
"text": "OBJECTIVE\nTo assess for the first time the morphology of the lymphatic system in patients with lipedema and lipo-lymphedema of the lower extremities by MR lymphangiography.\n\n\nMATERIALS AND METHODS\n26 lower extremities in 13 consecutive patients (5 lipedema, 8 lipo-lymphedema) were examined by MR lymphangiography. 18 mL of gadoteridol and 1 mL of mepivacainhydrochloride 1% were subdivided into 10 portions and injected intracutaneously in the forefoot. MR imaging was performed with a 1.5-T system equipped with high-performance gradients. For MR lymphangiography, a 3D-spoiled gradient-echo sequence was used. For evaluation of the lymphedema a heavily T2-weighted 3D-TSE sequence was performed.\n\n\nRESULTS\nIn all 16 lower extremities (100%) with lipo-lymphedema, high signal intensity areas in the epifascial region could be detected on the 3D-TSE sequence. In the 16 examined lower extremities with lipo-lymphedema, 8 lower legs and 3 upper legs demonstrated enlarged lymphatic vessels up to a diameter of 3 mm. In two lower legs with lipo-lymphedema, an area of dermal back-flow was seen, indicating lymphatic outflow obstruction. In the 10 examined lower extremities with clinically pure lipedema, 4 lower legs and 2 upper legs demonstrated enlarged lymphatic vessels up to a diameter of 2 mm, indicating a subclinical status of lymphedema. In all examined extremities, the inguinal lymph nodes demonstrated a contrast material enhancement in the first image acquisition 15 min after injection.\n\n\nCONCLUSION\nMR lymphangiography is a safe and accurate minimal-invasive imaging modality for the evaluation of the lymphatic circulation in patients with lipedema and lipo-lymphedema of the lower extremities. If the extent of lymphatic involvement is unclear at the initial clinical examination or requires a better definition for optimal therapeutic planning, MR lymphangiography is able to identify the anatomic and physiological derangements and to establish an objective baseline.",
"title": ""
}
] |
scidocsrr
|
70beaf80a2f11968730833a41d927927
|
EE-Grad: Exploration and Exploitation for Cost-Efficient Mini-Batch SGD
|
[
{
"docid": "938395ce421e0fede708e3b4ab7185b5",
"text": "This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications. Through case studies on text classification and the training of deep neural networks, we discuss how optimization problems arise in machine learning and what makes them challenging. A major theme of our study is that large-scale machine learning represents a distinctive setting in which the stochastic gradient (SG) method has traditionally played a central role while conventional gradient-based nonlinear optimization techniques typically falter. Based on this viewpoint, we present a comprehensive theory of a straightforward, yet versatile SG algorithm, discuss its practical behavior, and highlight opportunities for designing algorithms with improved performance. This leads to a discussion about the next generation of optimization methods for large-scale machine learning, including an investigation of two main streams of research on techniques that diminish noise in the stochastic directions and methods that make use of second-order derivative approximations.",
"title": ""
}
] |
[
{
"docid": "eaf16b3e9144426aed7edc092ad4a649",
"text": "In order to use a synchronous dynamic RAM (SDRAM) as the off-chip memory of an H.264/AVC encoder, this paper proposes an efficient SDRAM memory controller with an asynchronous bridge. With the proposed architecture, the SDRAM bandwidth is increased by making the operation frequency of an external SDRAM higher than that of the hardware accelerators of an H.264/AVC encoder. Experimental results show that the encoding speed is increased by 30.5% when the SDRAM clock frequency is increased from 100 MHz to 200 MHz while the H.264/AVC hardware accelerators operate at 100 MHz.",
"title": ""
},
{
"docid": "53a1d344a6e38dd790e58c6952e51cdb",
"text": "The thermal conductivities of individual single crystalline intrinsic Si nanowires with diameters of 22, 37, 56, and 115 nm were measured using a microfabricated suspended device over a temperature range of 20–320 K. Although the nanowires had well-defined crystalline order, the thermal conductivity observed was more than two orders of magnitude lower than the bulk value. The strong diameter dependence of thermal conductivity in nanowires was ascribed to the increased phonon-boundary scattering and possible phonon spectrum modification. © 2003 American Institute of Physics.@DOI: 10.1063/1.1616981 #",
"title": ""
},
{
"docid": "441e22ca7323b7490cbdf7f5e6e85a80",
"text": "Familial gigantiform cementoma (FGC) is a rare autosomal dominant, benign fibro-cemento-osseous lesion of the jaws that can cause severe facial deformity. True FGC with familial history is extremely rare and there has been no literature regarding the radiological follow-up of FGC. We report a case of recurrent FGC in an Asian female child who has been under our observation for 6 years since she was 15 months old. After repeated recurrences and subsequent surgeries, the growth of the tumor had seemed to plateau on recent follow-up CT images. The transition from an enhancing soft tissue lesion to a homogeneous bony lesion on CT may indicate decreased growth potential of FGC.",
"title": ""
},
{
"docid": "76d27ae5220bdd692448797e8115d658",
"text": "Abstinence following daily marijuana use can produce a withdrawal syndrome characterized by negative mood (eg irritability, anxiety, misery), muscle pain, chills, and decreased food intake. Two placebo-controlled, within-subject studies investigated the effects of a cannabinoid agonist, delta-9-tetrahydrocannabinol (THC: Study 1), and a mood stabilizer, divalproex (Study 2), on symptoms of marijuana withdrawal. Participants (n=7/study), who were not seeking treatment for their marijuana use, reported smoking 6–10 marijuana cigarettes/day, 6–7 days/week. Study 1 was a 15-day in-patient, 5-day outpatient, 15-day in-patient design. During the in-patient phases, participants took oral THC capsules (0, 10 mg) five times/day, 1 h prior to smoking marijuana (0.00, 3.04% THC). Active and placebo marijuana were smoked on in-patient days 1–8, while only placebo marijuana was smoked on days 9–14, that is, marijuana abstinence. Placebo THC was administered each day, except during one of the abstinence phases (days 9–14), when active THC was given. Mood, psychomotor task performance, food intake, and sleep were measured. Oral THC administered during marijuana abstinence decreased ratings of ‘anxious’, ‘miserable’, ‘trouble sleeping’, ‘chills’, and marijuana craving, and reversed large decreases in food intake as compared to placebo, while producing no intoxication. Study 2 was a 58-day, outpatient/in-patient design. Participants were maintained on each divalproex dose (0, 1500 mg/day) for 29 days each. Each maintenance condition began with a 14-day outpatient phase for medication induction or clearance and continued with a 15-day in-patient phase. Divalproex decreased marijuana craving during abstinence, yet increased ratings of ‘anxious’, ‘irritable’, ‘bad effect’, and ‘tired.’ Divalproex worsened performance on psychomotor tasks, and increased food intake regardless of marijuana condition. Thus, oral THC decreased marijuana craving and withdrawal symptoms at a dose that was subjectively indistinguishable from placebo. Divalproex worsened mood and cognitive performance during marijuana abstinence. These data suggest that oral THC, but not divalproex, may be useful in the treatment of marijuana dependence.",
"title": ""
},
{
"docid": "c213dd0989659d413b39e6698eb097cc",
"text": "It's not surprisingly when entering this site to get the book. One of the popular books now is the the major transitions in evolution. You may be confused because you can't find the book in the book store around your city. Commonly, the popular book will be sold quickly. And when you have found the store to buy the book, it will be so hurt when you run out of it. This is why, searching for this popular book in this website will give you benefit. You will not run out of this book.",
"title": ""
},
{
"docid": "9f9128951d6c842689f61fc19c79f238",
"text": "This paper concerns image reconstruction for helical x-ray transmission tomography (CT) with multi-row detectors. We introduce two approximate cone-beam (CB) filtered-backprojection (FBP) algorithms of the Feldkamp type, obtained by extending to three dimensions (3D) two recently proposed exact FBP algorithms for 2D fan-beam reconstruction. The new algorithms are similar to the standard Feldkamp-type FBP for helical CT. In particular, they can reconstruct each transaxial slice from data acquired along an arbitrary segment of helix, thereby efficiently exploiting the available data. In contrast to the standard Feldkamp-type algorithm, however, the redundancy weight is applied after filtering, allowing a more efficient numerical implementation. To partially alleviate the CB artefacts, which increase with increasing values of the helical pitch, a frequency-mixing method is proposed. This method reconstructs the high frequency components of the image using the longest possible segment of helix, whereas the low frequencies are reconstructed using a minimal, short-scan, segment of helix to minimize CB artefacts. The performance of the algorithms is illustrated using simulated data.",
"title": ""
},
{
"docid": "5fe43f0b23b0cfd82b414608e60db211",
"text": "The Distress Analysis Interview Corpus (DAIC) contains clinical interviews designed to support the diagnosis of psychological distress conditions such as anxiety, depression, and post traumatic stress disorder. The interviews are conducted by humans, human controlled agents and autonomous agents, and the participants include both distressed and non-distressed individuals. Data collected include audio and video recordings and extensive questionnaire responses; parts of the corpus have been transcribed and annotated for a variety of verbal and non-verbal features. The corpus has been used to support the creation of an automated interviewer agent, and for research on the automatic identification of psychological distress.",
"title": ""
},
{
"docid": "c7d54d4932792f9f1f4e08361716050f",
"text": "In this paper, we address several puzzles concerning speech acts,particularly indirect speech acts. We show how a formal semantictheory of discourse interpretation can be used to define speech actsand to avoid murky issues concerning the metaphysics of action. Weprovide a formally precise definition of indirect speech acts, includingthe subclass of so-called conventionalized indirect speech acts. Thisanalysis draws heavily on parallels between phenomena at the speechact level and the lexical level. First, we argue that, just as co-predicationshows that some words can behave linguistically as if they're `simultaneously'of incompatible semantic types, certain speech acts behave this way too.Secondly, as Horn and Bayer (1984) and others have suggested, both thelexicon and speech acts are subject to a principle of blocking or ``preemptionby synonymy'': Conventionalized indirect speech acts can block their`paraphrases' from being interpreted as indirect speech acts, even ifthis interpretation is calculable from Gricean-style principles. Weprovide a formal model of this blocking, and compare it withexisting accounts of lexical blocking.",
"title": ""
},
{
"docid": "bee944285ddd3e1e51e5056720a91aa0",
"text": "The iterative Born approximation (IBA) is a well-known method for describing waves scattered by semitransparent objects. In this letter, we present a novel nonlinear inverse scattering method that combines IBA with an edge-preserving total variation regularizer. The proposed method is obtained by relating iterations of IBA to layers of an artificial multilayer neural network and developing a corresponding error backpropagation algorithm for efficiently estimating the permittivity of the object. Simulations illustrate that, by accounting for multiple scattering, the method successfully recovers the permittivity distribution where the traditional linear inverse scattering fails.",
"title": ""
},
{
"docid": "d70214bbb417b0ff7d4a6efbb24abfb6",
"text": "While deep reinforcement learning techniques have recently produced considerable achievements on many decision-making problems, their use in robotics has largely been limited to simulated worlds or restricted motions, since unconstrained trial-and-error interactions in the real world can have undesirable consequences for the robot or its environment. To overcome such limitations, we propose a novel reinforcement learning architecture, OptLayer, that takes as inputs possibly unsafe actions predicted by a neural network and outputs the closest actions that satisfy chosen constraints. While learning control policies often requires carefully crafted rewards and penalties while exploring the range of possible actions, OptLayer ensures that only safe actions are actually executed and unsafe predictions are penalized during training. We demonstrate the effectiveness of our approach on robot reaching tasks, both simulated and in the real world.",
"title": ""
},
{
"docid": "86ef6a2a5c4f32c466bd3595a828bafb",
"text": "Rectus femoris muscle proximal injuries are not rare conditions. The proximal rectus femoris tendinous anatomy is complex and may be affected by traumatic, microtraumatic, or nontraumatic disorders. A good knowledge of the proximal rectus femoris anatomy allows a better understanding of injury and disorder patterns. A new sonographic lateral approach was recently described to assess the indirect head of the proximal rectus femoris, hence allowing for a complete sonographic assessment of the proximal rectus femoris tendons. This article will review sonographic features of direct, indirect, and conjoined rectus femoris tendon disorders.",
"title": ""
},
{
"docid": "1527601285eb1b2ef2de040154e3d4fb",
"text": "This paper exploits the context of natural dynamic scenes for human action recognition in video. Human actions are frequently constrained by the purpose and the physical properties of scenes and demonstrate high correlation with particular scene classes. For example, eating often happens in a kitchen while running is more common outdoors. The contribution of this paper is three-fold: (a) we automatically discover relevant scene classes and their correlation with human actions, (b) we show how to learn selected scene classes from video without manual supervision and (c) we develop a joint framework for action and scene recognition and demonstrate improved recognition of both in natural video. We use movie scripts as a means of automatic supervision for training. For selected action classes we identify correlated scene classes in text and then retrieve video samples of actions and scenes for training using script-to-video alignment. Our visual models for scenes and actions are formulated within the bag-of-features framework and are combined in a joint scene-action SVM-based classifier. We report experimental results and validate the method on a new large dataset with twelve action classes and ten scene classes acquired from 69 movies.",
"title": ""
},
{
"docid": "09e8e50db9ca9af79005013b73bbb250",
"text": "The number of tools for dynamics simulation has grown in the last years. It is necessary for the robotics community to have elements to ponder which of the available tools is the best for their research. As a complement to an objective and quantitative comparison, difficult to obtain since not all the tools are open-source, an element of evaluation is user feedback. With this goal in mind, we created an online survey about the use of dynamical simulation in robotics. This paper reports the analysis of the participants’ answers and a descriptive information fiche for the most relevant tools. We believe this report will be helpful for roboticists to choose the best simulation tool for their researches.",
"title": ""
},
{
"docid": "68f422172815df9fff6bf515bf7ea803",
"text": "Active learning (AL) promises to reduce the cost of annotating labeled datasets for trainable human language technologies. Contrary to expectations, when creating labeled training material for HPSG parse selection and latereusing it with other models, gains from AL may be negligible or even negative. This has serious implications for using AL, showing that additional cost-saving strategies may need to be adopted. We explore one such strategy: using a model during annotation to automate some of the decisions. Our best results show an 80% reduction in annotation cost compared with labeling randomly selected data with a single model.",
"title": ""
},
{
"docid": "3d3927d6be7ab9575439a3e26102852f",
"text": "A fundamental frequency (F0) estimator named Harvest is described. The unique points of Harvest are that it can obtain a reliable F0 contour and reduce the error that the voiced section is wrongly identified as the unvoiced section. It consists of two steps: estimation of F0 candidates and generation of a reliable F0 contour on the basis of these candidates. In the first step, the algorithm uses fundamental component extraction by many band-pass filters with different center frequencies and obtains the basic F0 candidates from filtered signals. After that, basic F0 candidates are refined and scored by using the instantaneous frequency, and then several F0 candidates in each frame are estimated. Since the frame-by-frame processing based on the fundamental component extraction is not robust against temporally local noise, a connection algorithm using neighboring F0s is used in the second step. The connection takes advantage of the fact that the F0 contour does not precipitously change in a short interval. We carried out an evaluation using two speech databases with electroglottograph (EGG) signals to compare Harvest with several state-of-the-art algorithms. Results showed that Harvest achieved the best performance of all algorithms.",
"title": ""
},
{
"docid": "6a4844bf755830d14fb24caff1aa8442",
"text": "We present a stochastic first-order optimization algorithm, named BCSC, that adds a cyclic constraint to stochastic block-coordinate descent. It uses different subsets of the data to update different subsets of the parameters, thus limiting the detrimental effect of outliers in the training set. Empirical tests in benchmark datasets show that our algorithm outperforms state-of-the-art optimization methods in both accuracy as well as convergence speed. The improvements are consistent across different architectures, and can be combined with other training techniques and regularization methods.",
"title": ""
},
{
"docid": "b13286a4875d30f6d32b43dd5d95bd79",
"text": "The complexity of indoor radio propagation has resulted in location-awareness being derived from empirical fingerprinting techniques, where positioning is performed via a previously-constructed radio map, usually of WiFi signals. The recent introduction of the Bluetooth Low Energy (BLE) radio protocol provides new opportunities for indoor location. It supports portable battery-powered beacons that can be easily distributed at low cost, giving it distinct advantages over WiFi. However, its differing use of the radio band brings new challenges too. In this work, we provide a detailed study of BLE fingerprinting using 19 beacons distributed around a ~600 m2 testbed to position a consumer device. We demonstrate the high susceptibility of BLE to fast fading, show how to mitigate this, and quantify the true power cost of continuous BLE scanning. We further investigate the choice of key parameters in a BLE positioning system, including beacon density, transmit power, and transmit frequency. We also provide quantitative comparison with WiFi fingerprinting. Our results show advantages to the use of BLE beacons for positioning. For one-shot (push-to-fix) positioning we achieve <; 2.6 m error 95% of the time for a dense BLE network (1 beacon per 30 m2), compared to <; 4.8 m for a reduced density (1 beacon per 100 m2) and <; 8.5 m for an established WiFi network in the same area.",
"title": ""
},
{
"docid": "08bb027bc95762431350d2260570faa0",
"text": "RetSim is an agent-based simulator of a shoe store based on the transactional data of one of the largest retail shoe sellers in Sweden. The aim of RetSim is the generation of synthetic data that can be used for fraud detection research. Statistical and a Social Network Analysis (SNA) of relations between staff and customers was used to develop and calibrate the model. Our ultimate goal is for RetSim to be usable to model relevant scenarios to generate realistic data sets that can be used by academia, and others, to develop and reason about fraud detection methods without leaking any sensitive information about the underlying data. Synthetic data has the added benefit of being easier to acquire, faster and at less cost, for experimentation even for those that have access to their own data. We argue that RetSim generates data that usefully approximates the relevant aspects of the real data.",
"title": ""
},
{
"docid": "4ceab082d195c1f69bb98793852f4a29",
"text": "This paper presents a 22 to 26.5 Gb/s optical receiver with an all-digital clock and data recovery (AD-CDR) fabricated in a 65 nm CMOS process. The receiver consists of an optical front-end and a half-rate bang-bang clock and data recovery circuit. The optical front-end achieves low power consumption by using inverter-based amplifiers and realizes sufficient bandwidth by applying several bandwidth extension techniques. In addition, in order to minimize additional jitter at the front-end, not only magnitude and bandwidth but also group-delay responses are considered. The AD-CDR employs an LC quadrature digitally controlled oscillator (LC-QDCO) to achieve a high phase noise figure-of-merit at tens of gigahertz. The recovered clock jitter is 1.28 ps rms and the measured jitter tolerance exceeds the tolerance mask specified in IEEE 802.3ba. The receiver sensitivity is 106 and 184 for a bit error rate of 10-12 at data rates of 25 and 26.5 Gb/s, respectively. The entire receiver chip occupies an active die area of 0.75 mm2 and consumes 254 mW at a data rate of 26.5 Gb/s. The energy efficiencies of the front-end and entire receiver at 26.5 Gb/s are 1.35 and 9.58 pJ/bit, respectively.",
"title": ""
},
{
"docid": "af7d318e1c203358c87592d0c6bcb4d2",
"text": "A fundamental component of spatial modulation (SM), termed generalized space shift keying (GSSK), is presented. GSSK modulation inherently exploits fading in wireless communication to provide better performance over conventional amplitude/phase modulation (APM) techniques. In GSSK, only the antenna indices, and not the symbols themselves (as in the case of SM and APM), relay information. We exploit GSSKpsilas degrees of freedom to achieve better performance, which is done by formulating its constellation in an optimal manner. To support our results, we also derive upper bounds on GSSKpsilas bit error probability, where the source of GSSKpsilas strength is made clear. Analytical and simulation results show performance gains (1.5-3 dB) over popular multiple antenna APM systems (including Bell Laboratories layered space time (BLAST) and maximum ratio combining (MRC) schemes), making GSSK an excellent candidate for future wireless applications.",
"title": ""
}
] |
scidocsrr
|
f879ae22c409e9a62a6576f7912b257b
|
Software debugging, testing, and verification
|
[
{
"docid": "d733f07d3b022ad8a7020c05292bcddd",
"text": "In Chapter 9 we discussed quality management models with examples of in-process metrics and reports. The models cover both the front-end design and coding activities and the back-end testing phases of development. The focus of the in-process data and reports, however, are geared toward the design review and code inspection data, although testing data is included. This chapter provides a more detailed discussion of the in-process metrics from the testing perspective. 1 These metrics have been used in the IBM Rochester software development laboratory for some years with continual evolution and improvement, so there is ample implementation experience with them. This is important because although there are numerous metrics for software testing, and new ones being proposed frequently, relatively few are supported by sufficient experiences of industry implementation to demonstrate their usefulness. For each metric, we discuss its purpose, data, interpretation , and use, and provide a graphic example based on real-life data. Then we discuss in-process quality management vis-à-vis these metrics and revisit the metrics 271 1. This chapter is a modified version of a white paper written for the IBM corporate-wide Software Test Community Leaders (STCL) group, which was published as \" In-process Metrics for Software Testing, \" in",
"title": ""
}
] |
[
{
"docid": "143da39941ecc8fb69e87d611503b9c0",
"text": "A dual-core 64b Xeonreg MP processor is implemented in a 65nm 8M process. The 435mm2 die has 1.328B transistors. Each core has two threads and a unified 1MB L2 cache. The 16MB unified, 16-way set-associative L3 cache implements both sleep and shut-off leakage reduction modes",
"title": ""
},
{
"docid": "fa1440ce586681326b18807e41e5465a",
"text": "Governments and businesses increasingly rely on data analytics and machine learning (ML) for improving their competitive edge in areas such as consumer satisfaction, threat intelligence, decision making, and product efficiency. However, by cleverly corrupting a subset of data used as input to a target’s ML algorithms, an adversary can perturb outcomes and compromise the effectiveness of ML technology. While prior work in the field of adversarial machine learning has studied the impact of input manipulation on correct ML algorithms, we consider the exploitation of bugs in ML implementations. In this paper, we characterize the attack surface of ML programs, and we show that malicious inputs exploiting implementation bugs enable strictly more powerful attacks than the classic adversarial machine learning techniques. We propose a semi-automated technique, called steered fuzzing, for exploring this attack surface and for discovering exploitable bugs in machine learning programs, in order to demonstrate the magnitude of this threat. As a result of our work, we responsibly disclosed five vulnerabilities, established three new CVE-IDs, and illuminated a common insecure practice across many machine learning systems. Finally, we outline several research directions for further understanding and mitigating this threat.",
"title": ""
},
{
"docid": "dc8d9a7da61aab907ee9def56dfbd795",
"text": "The ability to detect change-points in a dynamic network or a time series of graphs is an increasingly important task in many applications of the emerging discipline of graph signal processing. This paper formulates change-point detection as a hypothesis testing problem in terms of a generative latent position model, focusing on the special case of the Stochastic Block Model time series. We analyze two classes of scan statistics, based on distinct underlying locality statistics presented in the literature. Our main contribution is the derivation of the limiting properties and power characteristics of the competing scan statistics. Performance is compared theoretically, on synthetic data, and empirically, on the Enron email corpus.",
"title": ""
},
{
"docid": "f5311de600d7e50d5c9ecff5c49f7167",
"text": "Most work in machine reading focuses on question answering problems where the answer is directly expressed in the text to read. However, many real-world question answering problems require the reading of text not because it contains the literal answer, but because it contains a recipe to derive an answer together with the reader’s background knowledge. One example is the task of interpreting regulations to answer “Can I...?” or “Do I have to...?” questions such as “I am working in Canada. Do I have to carry on paying UK National Insurance?” after reading a UK government website about this topic. This task requires both the interpretation of rules and the application of background knowledge. It is further complicated due to the fact that, in practice, most questions are underspecified, and a human assistant will regularly have to ask clarification questions such as “How long have you been working abroad?” when the answer cannot be directly derived from the question and text. In this paper, we formalise this task and develop a crowd-sourcing strategy to collect 32k task instances based on real-world rules and crowd-generated questions and scenarios. We analyse the challenges of this task and assess its difficulty by evaluating the performance of rule-based and machine-learning baselines. We observe promising results when no background knowledge is necessary, and substantial room for improvement whenever background knowledge is needed.",
"title": ""
},
{
"docid": "da63c4d9cc2f3278126490de54c34ce5",
"text": "The growth of Web-based social networking and the properties of those networks have created great potential for producing intelligent software that integrates a user's social network and preferences. Our research looks particularly at assigning trust in Web-based social networks and investigates how trust information can be mined and integrated into applications. This article introduces a definition of trust suitable for use in Web-based social networks with a discussion of the properties that will influence its use in computation. We then present two algorithms for inferring trust relationships between individuals that are not directly connected in the network. Both algorithms are shown theoretically and through simulation to produce calculated trust values that are highly accurate.. We then present TrustMail, a prototype email client that uses variations on these algorithms to score email messages in the user's inbox based on the user's participation and ratings in a trust network.",
"title": ""
},
{
"docid": "982d7d2d65cddba4fa7dac3c2c920790",
"text": "In this paper, we present our multichannel neural architecture for recognizing emerging named entity in social media messages, which we applied in the Novel and Emerging Named Entity Recognition shared task at the EMNLP 2017 Workshop on Noisy User-generated Text (W-NUT). We propose a novel approach, which incorporates comprehensive word representations with multichannel information and Conditional Random Fields (CRF) into a traditional Bidirectional Long Short-Term Memory (BiLSTM) neural network without using any additional hand-crafted features such as gazetteers. In comparison with other systems participating in the shared task, our system won the 3rd place in terms of the average of two evaluation metrics.",
"title": ""
},
{
"docid": "509fa5630ed7e3e7bd914fb474da5071",
"text": "Languages with rich type systems are beginning to employ a blend of type inference and type checking, so that the type inference engine is guided by programmer-supplied type annotations. In this paper we show, for the first time, how to combine the virtues of two well-established ideas: unification-based inference, and bidi-rectional propagation of type annotations. The result is a type system that conservatively extends Hindley-Milner, and yet supports both higher-rank types and impredicativity.",
"title": ""
},
{
"docid": "772fc1cf2dd2837227facd31f897dba3",
"text": "Eighty-three brains obtained at autopsy from nondemented and demented individuals were examined for extracellular amyloid deposits and intraneuronal neurofibrillary changes. The distribution pattern and packing density of amyloid deposits turned out to be of limited significance for differentiation of neuropathological stages. Neurofibrillary changes occurred in the form of neuritic plaques, neurofibrillary tangles and neuropil threads. The distribution of neuritic plaques varied widely not only within architectonic units but also from one individual to another. Neurofibrillary tangles and neuropil threads, in contrast, exhibited a characteristic distribution pattern permitting the differentiation of six stages. The first two stages were characterized by an either mild or severe alteration of the transentorhinal layer Pre-α (transentorhinal stages I–II). The two forms of limbic stages (stages III–IV) were marked by a conspicuous affection of layer Pre-α in both transentorhinal region and proper entorhinal cortex. In addition, there was mild involvement of the first Ammon's horn sector. The hallmark of the two isocortical stages (stages V–VI) was the destruction of virtually all isocortical association areas. The investigation showed that recognition of the six stages required qualitative evaluation of only a few key preparations.",
"title": ""
},
{
"docid": "1cfa5ee5d737e42487e6aa1bdf2cafc9",
"text": "This article presents a new platform called PCIV (intelligent platform for vehicular control) for traffic monitoring, based on radio frequency Identification (RFID) and cloud computing, applied to road traffic monitoring in public transportation systems. This paper shows the design approach and the experimental validation of the platform in two real scenarios: a university campus and a small city. Experiments demonstrated RFID technology is viable to be implemented to monitor traffic in smart cities.",
"title": ""
},
{
"docid": "dcdb6242febbef358efe5a1461957291",
"text": "Neuromorphic Engineering has emerged as an exciting research area, primarily owing to the paradigm shift from conventional computing architectures to data-driven, cognitive computing. There is a diversity of work in the literature pertaining to neuromorphic systems, devices and circuits. This review looks at recent trends in neuromorphic engineering and its sub-domains, with an attempt to identify key research directions that would assume significance in the future. We hope that this review would serve as a handy reference to both beginners and experts, and provide a glimpse into the broad spectrum of applications of neuromorphic hardware and algorithms. Our survey indicates that neuromorphic engineering holds a promising future, particularly with growing data volumes, and the imminent need for intelligent, versatile computing.",
"title": ""
},
{
"docid": "bd039cbb3b9640e917b9cc15e45e5536",
"text": "We introduce adversarial neural networks for representation learning as a novel approach to transfer learning in brain-computer interfaces (BCIs). The proposed approach aims to learn subject-invariant representations by simultaneously training a conditional variational autoencoder (cVAE) and an adversarial network. We use shallow convolutional architectures to realize the cVAE, and the learned encoder is transferred to extract subject-invariant features from unseen BCI users’ data for decoding. We demonstrate a proof-of-concept of our approach based on analyses of electroencephalographic (EEG) data recorded during a motor imagery BCI experiment.",
"title": ""
},
{
"docid": "32334cf8520dde6743aa66b4e35742ff",
"text": "LinKBase® is a biomedical ontology. Its hierarchical structure, coverage, use of operational, formal and linguistic relationships, combined with its underlying language technology, make it an excellent ontology to support Natural Language Processing and Understanding (NLP/NLU) and data integration applications. In this paper we will describe the structure and coverage of LinKBase®. In addition, we will discuss the editing of LinKBase® and how domain experts are guided by specific editing rules to ensure modeling quality and consistency. Finally, we compare the structure of LinKBase® to the structure of third party terminologies and ontologies and discuss the integration of these data sources into",
"title": ""
},
{
"docid": "764e5c5201217be1aa9e24ce4fa3760a",
"text": "Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author. Please do not copy or distribute without explicit permission of the authors. Abstract Customer defection or churn is a widespread phenomenon that threatens firms across a variety of industries with dramatic financial consequences. To tackle this problem, companies are developing sophisticated churn management strategies. These strategies typically involve two steps – ranking customers based on their estimated propensity to churn, and then offering retention incentives to a subset of customers at the top of the churn ranking. The implicit assumption is that this process would maximize firm's profits by targeting customers who are most likely to churn. However, current marketing research and practice aims at maximizing the correct classification of churners and non-churners. Profit from targeting a customer depends on not only a customer's propensity to churn, but also on her spend or value, her probability of responding to retention offers, as well as the cost of these offers. Overall profit of the firm also depends on the number of customers the firm decides to target for its retention campaign. We propose a predictive model that accounts for all these elements. Our optimization algorithm uses stochastic gradient boosting, a state-of-the-art numerical algorithm based on stage-wise gradient descent. It also determines the optimal number of customers to target. The resulting optimal customer ranking and target size selection leads to, on average, a 115% improvement in profit compared to current methods. Remarkably, the improvement in profit comes along with more prediction errors in terms of which customers will churn. However, the new loss function leads to better predictions where it matters the most for the company's profits. For a company like Verizon Wireless, this translates into a profit increase of at least $28 million from a single retention campaign, without any additional implementation cost.",
"title": ""
},
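To make the profit-based targeting logic above concrete, here is a minimal sketch (not the authors' boosted model): customers are ranked by expected incremental profit and the target-set size is chosen to maximize total expected campaign profit. The churn probabilities, response probabilities, values, and offer costs are assumed inputs.

```python
import numpy as np

def optimal_targeting(p_churn, p_respond, value, offer_cost):
    """All arguments are 1-D arrays of the same length (one entry per customer).
    Expected profit of targeting a customer: value saved if she would otherwise
    churn and responds to the offer, minus the cost of making the offer."""
    expected_profit = p_churn * p_respond * value - offer_cost
    order = np.argsort(-expected_profit)               # best candidates first
    cumulative = np.cumsum(expected_profit[order])
    k = int(np.argmax(cumulative)) + 1                 # number of customers to target
    return order[:k], cumulative[k - 1]

# toy usage with made-up customer data
rng = np.random.default_rng(0)
n = 1000
targets, profit = optimal_targeting(
    p_churn=rng.uniform(0, 1, n),
    p_respond=rng.uniform(0.1, 0.5, n),
    value=rng.gamma(2.0, 50.0, n),
    offer_cost=np.full(n, 10.0),
)
print(len(targets), round(profit, 2))
```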
{
"docid": "0fdd7f5c5cd1225567e89b456ef25ea0",
"text": "In this work we propose a hierarchical approach for labeling semantic objects and regions in scenes. Our approach is reminiscent of early vision literature in that we use a decomposition of the image in order to encode relational and spatial information. In contrast to much existing work on structured prediction for scene understanding, we bypass a global probabilistic model and instead directly train a hierarchical inference procedure inspired by the message passing mechanics of some approximate inference procedures in graphical models. This approach mitigates both the theoretical and empirical difficulties of learning probabilistic models when exact inference is intractable. In particular, we draw from recent work in machine learning and break the complex inference process into a hierarchical series of simple machine learning subproblems. Each subproblem in the hierarchy is designed to capture the image and contextual statistics in the scene. This hierarchy spans coarse-to-fine regions and explicitly models the mixtures of semantic labels that may be present due to imperfect segmentation. To avoid cascading of errors and overfitting, we train the learning problems in sequence to ensure robustness to likely errors earlier in the inference sequence and leverage the stacking approach developed by Cohen et al.",
"title": ""
},
{
"docid": "314fba798c73569f6c8fa266821bac8e",
"text": "Core to integrated navigation systems is the concept of fusing noisy observations from GPS, Inertial Measurement Units (IMU), and other available sensors. The current industry standard and most widely used algorithm for this purpose is the extended Kalman filter (EKF) [6]. The EKF combines the sensor measurements with predictions coming from a model of vehicle motion (either dynamic or kinematic), in order to generate an estimate of the current navigational state (position, velocity, and attitude). This paper points out the inherent shortcomings in using the EKF and presents, as an alternative, a family of improved derivativeless nonlinear Kalman filters called sigma-point Kalman filters (SPKF). We demonstrate the improved state estimation performance of the SPKF by applying it to the problem of loosely coupled GPS/INS integration. A novel method to account for latency in the GPS updates is also developed for the SPKF (such latency compensation is typically inaccurate or not practical with the EKF). A UAV (rotor-craft) test platform is used to demonstrate the results. Performance metrics indicate an approximate 30% error reduction in both attitude and position estimates relative to the baseline EKF implementation.",
"title": ""
},
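The sigma-point filters referred to above are built around the unscented transform: a small set of deterministically chosen sigma points is pushed through the nonlinearity, and the output mean and covariance are recovered from weighted sums. The sketch below shows only that building block with common textbook weights (the scaling parameters are illustrative defaults), not the authors' full GPS/INS filter.

```python
import numpy as np

def unscented_transform(f, mean, cov, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)

    # 2n + 1 sigma points: the mean plus symmetric spreads along each column
    sigma = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])

    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)

    y = np.array([f(s) for s in sigma])
    y_mean = wm @ y
    diff = y - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov

# example: polar-to-Cartesian conversion of an uncertain range/bearing measurement
m, c = np.array([1.0, np.pi / 4]), np.diag([0.01, 0.05])
print(unscented_transform(lambda x: np.array([x[0] * np.cos(x[1]),
                                              x[0] * np.sin(x[1])]), m, c))
```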
{
"docid": "33cab0ec47af5e40d64e34f8ffc7dd6f",
"text": "This inaugural article has a twofold purpose: (i) to present a simpler and more general justification of the fundamental scaling laws of quasibrittle fracture, bridging the asymptotic behaviors of plasticity, linear elastic fracture mechanics, and Weibull statistical theory of brittle failure, and (ii) to give a broad but succinct overview of various applications and ramifications covering many fields, many kinds of quasibrittle materials, and many scales (from 10(-8) to 10(6) m). The justification rests on developing a method to combine dimensional analysis of cohesive fracture with second-order accurate asymptotic matching. This method exploits the recently established general asymptotic properties of the cohesive crack model and nonlocal Weibull statistical model. The key idea is to select the dimensionless variables in such a way that, in each asymptotic case, all of them vanish except one. The minimal nature of the hypotheses made explains the surprisingly broad applicability of the scaling laws.",
"title": ""
},
{
"docid": "97adb3a003347f579706cd01a762bdc9",
"text": "The Universal Serial Bus (USB) is an extremely popular interface standard for computer peripheral connections and is widely used in consumer Mass Storage Devices (MSDs). While current consumer USB MSDs provide relatively high transmission speed and are convenient to carry, the use of USB MSDs has been prohibited in many commercial and everyday environments primarily due to security concerns. Security protocols have been previously proposed and a recent approach for the USB MSDs is to utilize multi-factor authentication. This paper proposes significant enhancements to the three-factor control protocol that now makes it secure under many types of attacks including the password guessing attack, the denial-of-service attack, and the replay attack. The proposed solution is presented with a rigorous security analysis and practical computational cost analysis to demonstrate the usefulness of this new security protocol for consumer USB MSDs.",
"title": ""
},
{
"docid": "18c230517b8825b616907548829e341b",
"text": "The application of small Remotely-Controlled (R/C) aircraft for aerial photography presents many unique advantages over manned aircraft due to their lower acquisition cost, lower maintenance issue, and superior flexibility. The extraction of reliable information from these images could benefit DOT engineers in a variety of research topics including, but not limited to work zone management, traffic congestion, safety, and environmental. During this effort, one of the West Virginia University (WVU) R/C aircraft, named ‘Foamy’, has been instrumented for a proof-of-concept demonstration of aerial data acquisition. Specifically, the aircraft has been outfitted with a GPS receiver, a flight data recorder, a downlink telemetry hardware, a digital still camera, and a shutter-triggering device. During the flight a ground pilot uses one of the R/C channels to remotely trigger the camera. Several hundred high-resolution geo-tagged aerial photographs were collected during 10 flight experiments at two different flight fields. A Matlab based geo-reference software was developed for measuring distances from an aerial image and estimating the geo-location of each ground asset of interest. A comprehensive study of potential Sources of Errors (SOE) has also been performed with the goal of identifying and addressing various factors that might affect the position estimation accuracy. The result of the SOE study concludes that a significant amount of position estimation error was introduced by either mismatching of different measurements or by the quality of the measurements themselves. The first issue is partially addressed through the design of a customized Time-Synchronization Board (TSB) based on a MOD 5213 embedded microprocessor. The TSB actively controls the timing of the image acquisition process, ensuring an accurate matching of the GPS measurement and the image acquisition time. The second issue is solved through the development of a novel GPS/INS (Inertial Navigation System) based on a 9-state Extended Kalman Filter (EKF). The developed sensor fusion algorithm provides a good estimation of aircraft attitude angle without the need for using expensive sensors. Through the help of INS integration, it also provides a very smooth position estimation that eliminates large jumps typically seen in the raw GPS measurements.",
"title": ""
},
{
"docid": "84436fc1467a259e0e584da3af6f5ef7",
"text": "BACKGROUND\nMicroRNAs are short regulatory RNAs that negatively modulate protein expression at a post-transcriptional and/or translational level and are deeply involved in the pathogenesis of several types of cancers. Specifically, microRNA-221 (miR-221) is overexpressed in many human cancers, wherein accumulating evidence indicates that it functions as an oncogene. However, the function of miR-221 in human osteosarcoma has not been totally elucidated. In the present study, the effects of miR-221 on osteosarcoma and the possible mechanism by which miR-221 affected the survival, apoptosis, and cisplatin resistance of osteosarcoma were investigated.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nReal-time quantitative PCR analysis revealed miR-221 was significantly upregulated in osteosarcoma cell lines than in osteoblasts. Both human osteosarcoma cell lines SOSP-9607 and MG63 were transfected with miR-221 mimic or inhibitor to regulate miR-221 expression. The effects of miR-221 were then assessed by cell viability, cell cycle analysis, apoptosis assay, and cisplatin resistance assay. In both cells, upregulation of miR-221 induced cell survival and cisplatin resistance and reduced cell apoptosis. In addition, knockdown of miR-221 inhibited cell growth and cisplatin resistance and induced cell apoptosis. Potential target genes of miR-221 were predicted using bioinformatics. Moreover, luciferase reporter assay and western blot confirmed that PTEN was a direct target of miR-221. Furthermore, introduction of PTEN cDNA lacking 3'-UTR or PI3K inhibitor LY294002 abrogated miR-221-induced cisplatin resistance. Finally, both miR-221 and PTEN expression levels in osteosarcoma samples were examined by using real-time quantitative PCR and immunohistochemistry. High miR-221 expression level and inverse correlation between miR-221 and PTEN levels were revealed in osteosarcoma tissues.\n\n\nCONCLUSIONS/SIGNIFICANCE\nThese results for the first time demonstrate that upregulation of miR-221 induces the malignant phenotype of human osteosarcoma whereas knockdown of miR-221 reverses this phenotype, suggesting that miR-221 could be a potential target for osteosarcoma treatment.",
"title": ""
}
] |
scidocsrr
|
a8ab75a5ab20fe1c2fccf0fe04c3dc29
|
Design and Kinematic Modeling of Constant Curvature Continuum Robots: A Review
|
[
{
"docid": "f11dbf9c32b126de695801957171465c",
"text": "Continuum robots, which are composed of multiple concentric, precurved elastic tubes, can provide dexterity at diameters equivalent to standard surgical needles. Recent mechanics-based models of these “active cannulas” are able to accurately describe the curve of the robot in free space, given the preformed tube curves and the linear and angular positions of the tube bases. However, in practical applications, where the active cannula must interact with its environment or apply controlled forces, a model that accounts for deformation under external loading is required. In this paper, we apply geometrically exact rod theory to produce a forward kinematic model that accurately describes large deflections due to a general collection of externally applied point and/or distributed wrench loads. This model accommodates arbitrarily many tubes, with each having a general preshaped curve. It also describes the independent torsional deformation of the individual tubes. Experimental results are provided for both point and distributed loads. Average tip error under load was 2.91 mm (1.5% - 3% of total robot length), which is similar to the accuracy of existing free-space models.",
"title": ""
}
] |
[
{
"docid": "e5020601a6e4b2c07868ffc0f84498ae",
"text": "We describe a combined nonlinear acoustic echo cancellation and residual echo suppression system. The echo canceler uses parallel Hammerstein branches consisting of fixed nonlinear basis functions and linear adaptive filters. The residual echo suppressor uses an Artificial Neural Network for modeling of the residual echo spectrum from spectral features computed from the far-end signal. We show that modeling nonlinear effects both in the echo canceler and in the echo suppressor leads to an increased performance of the combined system.",
"title": ""
},
{
"docid": "cd4d874d0428a61c27bdcadc752c7d68",
"text": "Recent advances in genome technologies and the ensuing outpouring of genomic information related to cancer have accelerated the convergence of discovery science and clinical medicine. Successful examples of translating cancer genomics into therapeutics and diagnostics reinforce its potential to make possible personalized cancer medicine. However, the bottlenecks along the path of converting a genome discovery into a tangible clinical endpoint are numerous and formidable. In this Perspective, we emphasize the importance of establishing the biological relevance of a cancer genomic discovery in realizing its clinical potential and discuss some of the major obstacles to moving from the bench to the bedside.",
"title": ""
},
{
"docid": "1186bb5c96eebc26ce781d45fae7768d",
"text": "Essential genes are required for the viability of an organism. Accurate and rapid identification of new essential genes is of substantial theoretical interest to synthetic biology and has practical applications in biomedicine. Fractals provide facilitated access to genetic structure analysis on a different scale. In this study, machine learning-based methods using solely fractal features are presented and the problem of predicting essential genes in bacterial genomes is evaluated. Six fractal features were investigated to learn the parameters of five supervised classification methods for the binary classification task. The optimal parameters of these classifiers are determined via grid-based searching technique. All the currently available identified genes from the database of essential genes were utilized to build the classifiers. The fractal features were proven to be more robust and powerful in the prediction performance. In a statistical sense, the ELM method shows superiority in predicting the essential genes. Non-parameter tests of the average AUC and ACC showed that the fractal feature is much better than other five compared features sets. Our approach is promising and convenient to identify new bacterial essential genes.",
"title": ""
},
{
"docid": "557451621286ecd4fbf21909ff88450f",
"text": "BACKGROUND\nMany studies have demonstrated that honey has antibacterial activity in vitro, and a small number of clinical case studies have shown that application of honey to severely infected cutaneous wounds is capable of clearing infection from the wound and improving tissue healing. Research has also indicated that honey may possess anti-inflammatory activity and stimulate immune responses within a wound. The overall effect is to reduce infection and to enhance wound healing in burns, ulcers, and other cutaneous wounds. The objective of the study was to find out the results of topical wound dressings in diabetic wounds with natural honey.\n\n\nMETHODS\nThe study was conducted at department of Orthopaedics, Unit-1, Liaquat University of Medical and Health Sciences, Jamshoro from July 2006 to June 2007. Study design was experimental. The inclusion criteria were patients of either gender with any age group having diabetic foot Wagner type I, II, III and II. The exclusion criteria were patients not willing for studies and who needed urgent amputation due to deteriorating illness. Initially all wounds were washed thoroughly and necrotic tissues removed and dressings with honey were applied and continued up to healing of wounds.\n\n\nRESULTS\nTotal number of patients was 12 (14 feet). There were 8 males (66.67%) and 4 females (33.33%), 2 cases (16.67%) were presented with bilateral diabetic feet. The age range was 35 to 65 years (46 +/- 9.07 years). Amputations of big toe in 3 patients (25%), second and third toe ray in 2 patients (16.67%) and of fourth and fifth toes at the level of metatarsophalengeal joints were done in 3 patients (25%). One patient (8.33%) had below knee amputation.\n\n\nCONCLUSION\nIn our study we observed excellent results in treating diabetic wounds with dressings soaked with natural honey. The disability of diabetic foot patients was minimized by decreasing the rate of leg or foot amputations and thus enhancing the quality and productivity of individual life.",
"title": ""
},
{
"docid": "831b153045d9afc8f92336b3ba8019c6",
"text": "The progress in the field of electronics and technology as well as the processing of signals coupled with advance in the use of computer technology has given the opportunity to record and analyze the bio-electric signals from the human body in real time that requires dealing with many challenges according to the nature of the signal and its frequency. This could be up to 1 kHz, in addition to the need to transfer data from more than one channel at the same time. Moreover, another challenge is a high sensitivity and low noise measurements of the acquired bio-electric signals which may be tens of micro volts in amplitude. For these reasons, a low power wireless Electromyography (EMG) data transfer system is designed in order to meet these challenging demands. In this work, we are able to develop an EMG analogue signal processing hardware, along with computer based supporting software. In the development of the EMG analogue signal processing hardware, many important issues have been addressed. Some of these issues include noise and artifact problems, as well as the bias DC current. The computer based software enables the user to analyze the collected EMG data and plot them on graphs for visual decision making. The work accomplished in this study enables users to use the surface EMG device for recording EMG signals for various purposes in movement analysis in medical diagnosis, rehabilitation sports medicine and ergonomics. Results revealed that the proposed system transmit and receive the signal without any losing in the information of signals.",
"title": ""
},
{
"docid": "f7c62753c37d83d089c5b1e910140ac4",
"text": "It is often desirable to determine if an image has been modified in any way from its original recording. The JPEG format affords engineers many implementation trade-offs which give rise to widely varying JPEG headers. We exploit these variations for image authentication. A camera signature is extracted from a JPEG image consisting of information about quantization tables, Huffman codes, thumbnails, and exchangeable image file format (EXIF). We show that this signature is highly distinct across 1.3 million images spanning 773 different cameras and cell phones. Specifically, 62% of images have a signature that is unique to a single camera, 80% of images have a signature that is shared by three or fewer cameras, and 99% of images have a signature that is unique to a single manufacturer. The signature of Adobe Photoshop is also shown to be unique relative to all 773 cameras. These signatures are simple to extract and offer an efficient method to establish the authenticity of a digital image.",
"title": ""
},
{
"docid": "494ed6efac81a9e8bbdbfa9f19a518d3",
"text": "We studied the possibilities of embroidered antenna-IC interconnections and contour antennas in passive ultrahigh-frequency radio-frequency identification textile tags. The tag antennas were patterned from metal-coated fabrics and embroidered with conductive yarn. The wireless performance of the tags with embroidered antenna-IC interconnections was evaluated through measurements, and the results were compared to identical tags, where the ICs were attached using regular conductive epoxy. Our results show that the textile tags with embroidered antenna-IC interconnections attained similar performance. In addition, the tags where only the borderlines of the antennas were embroidered showed excellent wireless performance.",
"title": ""
},
{
"docid": "60a3ba5263067030434db976e6e121db",
"text": "Background and Objective: Physical inactivity is the fourth leading risk factor for global mortality. Physical inactivity levels are rising in developing countries and Malaysia is of no exception. Malaysian Adult Nutrition Survey 2003 reported that the prevalence of physical inactivity was 39.7% and the prevalence was higher for women (42.6%) than men (36.7%). In Malaysia, the National Health and Morbidity Survey 2006 reported that 43.7% (5.5 million) of Malaysian adults were physically inactive. These statistics show that physically inactive is an important public health concern in Malaysia. College students have been found to have poor physical activity habits. The objective of this study was to identify the physical activity level among students of Asia Metropolitan University (AMU) in Malaysia.",
"title": ""
},
{
"docid": "d27ed8fd2acd0dad6436b7e98853239d",
"text": "a r t i c l e i n f o What are the psychological mechanisms that trigger habits in daily life? Two studies reveal that strong habits are influenced by context cues associated with past performance (e.g., locations) but are relatively unaffected by current goals. Specifically, performance contexts—but not goals—automatically triggered strongly habitual behaviors in memory (Experiment 1) and triggered overt habit performance (Experiment 2). Nonetheless, habits sometimes appear to be linked to goals because people self-perceive their habits to be guided by goals. Furthermore, habits of moderate strength are automatically influenced by goals, yielding a curvilinear, U-shaped relation between habit strength and actual goal influence. Thus, research that taps self-perceptions or moderately strong habits may find habits to be linked to goals. Introduction Having cast off the strictures of behaviorism, psychologists are showing renewed interest in the psychological processes that guide This interest is fueled partly by the recognition that automaticity is not a unitary construct. Hence, different kinds of automatic responses may be triggered and controlled in different ways (Bargh, 1994; Moors & De Houwer, 2006). However, the field has not yet converged on a common understanding of the psychological mechanisms that underlie habits. Habits can be defined as psychological dispositions to repeat past behavior. They are acquired gradually as people repeatedly respond in a recurring context (e.g., performance settings, action sequences, Wood & Neal, 2007, 2009). Most researchers agree that habits often originate in goal pursuit, given that people are likely to repeat actions that are rewarding or yield desired outcomes. In addition, habit strength is a continuum, with habits of weak and moderate strength performed with lower frequency and/or in more variable contexts than strong habits This consensus aside, it remains unclear how goals and context cues influence habit automaticity. Goals are motivational states that (a) define a valued outcome that (b) energizes and directs action (e.g., the goal of getting an A in class energizes late night studying; Förster, Liberman, & Friedman, 2007). In contrast, context cues for habits reflect features of the performance environment in which the response typically occurs (e.g., the college library as a setting for late night studying). Some prior research indicates that habits are activated automatically by goals (e.g., Aarts & Dijksterhuis, 2000), whereas others indicate that habits are activated directly by context cues, with minimal influence of goals In the present experiments, we first test the cognitive associations …",
"title": ""
},
{
"docid": "a691ec038ef76874afe0a2b67ff75d3e",
"text": "Uveitis is a general term for intraocular inflammation and includes a large number of clinical phenotypes. As a group of disorders, it is responsible for 10% of all registered blind patients under the age of 65 years. Immune-mediated uveitis may be associated with a systemic disease or may be localized to the eye. The pro-inflammatory cytokines interleukin (IL)-1beta, IL-2, IL-6, interferon-gamma and tumor necrosis factor-alpha have all been detected within the ocular fluids or tissues in the inflamed eye together with others, such as IL-4, IL-5, IL-10 and transforming growth factor-beta. The chemokines IL-8, monocyte chemoattractant protein-1, macrophage inflammatory protein (MIP)-1alpha, MIP-1beta and fractalkine are also thought to be involved in the associated inflammatory response. There have been a number of studies in recent years investigating cytokine profiles in different forms of uveitis with a view to determining what cytokines are important in the inflamed eye. This review attempts to present the current state of knowledge from in vitro and in vivo research on the inflammatory cytokines in intraocular inflammatory diseases.",
"title": ""
},
{
"docid": "6eb2c0e22ecc0816cb5f83292902d799",
"text": "In this paper, we demonstrate that Android malware can bypass all automated analysis systems, including AV solutions, mobile sandboxes, and the Google Bouncer. We propose a tool called Sand-Finger for the fingerprinting of Android-based analysis systems. By analyzing the fingerprints of ten unique analysis environments from different vendors, we were able to find characteristics in which all tested environments differ from actual hardware. Depending on the availability of an analysis system, malware can either behave benignly or load malicious code at runtime. We classify this group of malware as Divide-and-Conquer attacks that are efficiently obfuscated by a combination of fingerprinting and dynamic code loading. In this group, we aggregate attacks that work against dynamic as well as static analysis. To demonstrate our approach, we create proof-of-concept malware that surpasses up-to-date malware scanners for Android. We also prove that known malware samples can enter the Google Play Store by modifying them only slightly. Due to Android's lack of an API for malware scanning at runtime, it is impossible for AV solutions to secure Android devices against these attacks.",
"title": ""
},
{
"docid": "dda8427a6630411fc11e6d95dbff08b9",
"text": "Text representations using neural word embeddings have proven effective in many NLP applications. Recent researches adapt the traditional word embedding models to learn vectors of multiword expressions (concepts/entities). However, these methods are limited to textual knowledge bases (e.g., Wikipedia). In this paper, we propose a novel and simple technique for integrating the knowledge about concepts from two large scale knowledge bases of different structure (Wikipedia and Probase) in order to learn concept representations. We adapt the efficient skip-gram model to seamlessly learn from the knowledge in Wikipedia text and Probase concept graph. We evaluate our concept embedding models on two tasks: (1) analogical reasoning, where we achieve a state-of-the-art performance of 91% on semantic analogies, (2) concept categorization, where we achieve a state-of-the-art performance on two benchmark datasets achieving categorization accuracy of 100% on one and 98% on the other. Additionally, we present a case study to evaluate our model on unsupervised argument type identification for neural semantic parsing. We demonstrate the competitive accuracy of our unsupervised method and its ability to better generalize to out of vocabulary entity mentions compared to the tedious and error prone methods which depend on gazetteers and regular expressions.",
"title": ""
},
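A small sketch of one piece of the approach described above: multiword concepts are merged into single tokens before generating standard skip-gram (target, context) training pairs. The concept inventory and sentence are made up, and the windowing shown is the generic skip-gram scheme rather than the paper's exact joint text-plus-graph training.

```python
def merge_concepts(tokens, concepts):
    """Greedily replace known multiword concepts with single underscore-joined tokens."""
    out, i = [], 0
    while i < len(tokens):
        for span in (3, 2):                       # try longer phrases first
            phrase = tuple(tokens[i:i + span])
            if phrase in concepts:
                out.append("_".join(phrase))
                i += span
                break
        else:
            out.append(tokens[i])
            i += 1
    return out

def skipgram_pairs(tokens, window=2):
    """Yield (target, context) pairs that a skip-gram model would be trained on."""
    for i, target in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                yield target, tokens[j]

concepts = {("new", "york", "city"), ("machine", "learning")}
sent = "machine learning methods are popular in new york city".split()
print(list(skipgram_pairs(merge_concepts(sent, concepts))))
```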
{
"docid": "18b7dadfec8b02624b6adeb2a65d7223",
"text": "This paper provides a brief introduction to recent work in st atistical parsing and its applications. We highlight succes ses to date, remaining challenges, and promising future work.",
"title": ""
},
{
"docid": "6fbce446ceb871bc1d832ce8d06398af",
"text": "The 250 kW TRIGA Mark II research reactor, Vienna, operates since 7 March 1962. The initial criticality was achieved with the first core loading of 57 fuel elements (FE) of same type (Aluminium clad fuel with 20% enrichment). Later on due to fuel consumption SST clad 20% enriched FE (s) have been added to compensate the reactor core burn-up. In 1975 high enriched (HEU) TRIGA fuel (FLIP fuel = Fuel Lifetime Improvement Program) was introduced into the core. The addition of this FLIP fuel resulted in the current completely mixed core. Therefore the current core of the TRIGA reactor Vienna is operating with a completely mixed core using three different types of fuels with two categories of enrichments. This makes the reactor physics calculations very complicated. To calculate the current core, a Monte Carlo based radiation transport computer code MCNP5 was employed to develop the current core of the TRIGA reactor. The present work presents the MCNP model of the current core and its validation through two experiments performed on the reactor. The experimental results of criticality and reactivity distribution experiments confirm the current core model. As the basis of this paper is based on the long-term cooperation with our colleague Dr. Matjaz Ravnik we therefore devote this paper in his memory.",
"title": ""
},
{
"docid": "cf42ab9460b2665b6537d6172b4ef3fb",
"text": "Small drones are being utilized in monitoring, transport, safety and disaster management, and other domains. Envisioning that drones form autonomous networks incorporated into the air traffic, we describe a high-level architecture for the design of a collaborative aerial system consisting of drones with on-board sensors and embedded processing, sensing, coordination, and networking capabilities. We implement a multi-drone system consisting of quadcopters and demonstrate its potential in disaster assistance, search and rescue, and aerial monitoring. Furthermore, we illustrate design challenges and present potential solutions based on the lessons learned so far.",
"title": ""
},
{
"docid": "22bcd1d04c92bc6c108638df91997e9b",
"text": "State of the art automatic optimization of OpenCL applications focuses on improving the performance of individual compute kernels. Programmers address opportunities for inter-kernel optimization in specific applications by ad-hoc hand tuning: manually fusing kernels together. However, the complexity of interactions between host and kernel code makes this approach weak or even unviable for applications involving more than a small number of kernel invocations or a highly dynamic control flow, leaving substantial potential opportunities unexplored. It also leads to an over complex, hard to maintain code base. We present Helium, a transparent OpenCL overlay which discovers, manipulates and exploits opportunities for inter-and intra-kernel optimization. Helium is implemented as preloaded library and uses a delay-optimize-replay mechanism in which kernel calls are intercepted, collectively optimized, and then executed according to an improved execution plan. This allows us to benefit from composite optimizations, on large, dynamically complex applications, with no impact on the code base. Our results show that Helium obtains at least the same, and frequently even better performance, than carefully handtuned code. Helium outperforms hand-optimized code where the exact dynamic composition of compute kernel cannot be known statically. In these cases, we demonstrate speedups of up to 3x over unoptimized code and an average speedup of 1.4x over hand optimized code.",
"title": ""
},
{
"docid": "4413ef4f192d5061da7bf2baa82c9048",
"text": "We developed and piloted a program for first-grade students to promote development of legible handwriting and writing fluency. The Write Start program uses a coteaching model in which occupational therapists and teachers collaborate to develop and implement a handwriting-writing program. The small-group format with embedded individualized supports allows the therapist to guide and monitor student performance and provide immediate feedback. The 12-wk program was implemented with 1 class of 19 students. We administered the Evaluation of Children's Handwriting Test, Minnesota Handwriting Assessment, and Woodcock-Johnson Fluency and Writing Samples test at baseline, immediately after the Write Start program, and at the end of the school year. Students made large, significant gains in handwriting legibility and speed and in writing fluency that were maintained at 6-mo follow-up. The Write Start program appears to promote handwriting and writing skills in first-grade students and is ready for further study in controlled trials.",
"title": ""
},
{
"docid": "e4920839c6b2bcacd72cbce578f44f01",
"text": "The ability to predict the reliability of a software system early in its development, e.g., during architectural design, can help to improve the system's quality in a cost-effective manner. Existing architecture-level reliability prediction approaches focus on system-level reliability and assume that the reliabilities of individual components are known. In general, this assumption is unreasonable, making component reliability prediction an important missing ingredient in the current literature. Early prediction of component reliability is a challenging problem because of many uncertainties associated with components under development. In this paper we address these challenges in developing a software component reliability prediction framework. We do this by exploiting architectural models and associated analysis techniques, stochastic modeling approaches, and information sources available early in the development lifecycle. We extensively evaluate our framework to illustrate its utility as an early reliability prediction approach.",
"title": ""
},
{
"docid": "37501837b77c336d01f751a0a2fafd1d",
"text": "Brain-inspired Hyperdimensional (HD) computing emulates cognition tasks by computing with hypervectors rather than traditional numerical values. In HD, an encoder maps inputs to high dimensional vectors (hypervectors) and combines them to generate a model for each existing class. During inference, HD performs the task of reasoning by looking for similarities of the input hypervector and each pre-stored class hypervector However, there is not a unique encoding in HD which can perfectly map inputs to hypervectors. This results in low HD classification accuracy over complex tasks such as speech recognition. In this paper we propose MHD, a multi-encoder hierarchical classifier, which enables HD to take full advantages of multiple encoders without increasing the cost of classification. MHD consists of two HD stages: a main stage and a decider stage. The main stage makes use of multiple classifiers with different encoders to classify a wide range of input data. Each classifier in the main stage can trade between efficiency and accuracy by dynamically varying the hypervectors' dimensions. The decider stage, located before the main stage, learns the difficulty of the input data and selects an encoder within the main stage that will provide the maximum accuracy, while also maximizing the efficiency of the classification task. We test the accuracy/efficiency of the proposed MHD on speech recognition application. Our evaluation shows that MHD can provide a 6.6× improvement in energy efficiency and a 6.3× speedup, as compared to baseline single level HD.",
"title": ""
},
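For readers unfamiliar with the hyperdimensional classification flow that MHD builds on, the sketch below shows the basic pipeline: random bipolar hypervectors encode quantized features, class prototypes are bundled from training encodings, and inference picks the most similar prototype. The simple record-based encoder here is an assumption chosen for brevity, not MHD's multi-encoder hierarchy.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000                                     # hypervector dimensionality

def random_hv(n):
    return rng.choice([-1, 1], size=(n, D))

def encode(sample, id_hvs, level_hvs, levels):
    """Bind each feature's id hypervector with its quantized-level hypervector, then bundle."""
    idx = np.digitize(sample, levels) - 1
    bound = id_hvs * level_hvs[idx]            # elementwise binding
    return np.sign(bound.sum(axis=0))          # bundling (majority vote per dimension)

def train(X, y, id_hvs, level_hvs, levels):
    classes = {}
    for x, label in zip(X, y):
        classes.setdefault(label, np.zeros(D))
        classes[label] += encode(x, id_hvs, level_hvs, levels)
    return {k: np.sign(v) for k, v in classes.items()}

def predict(x, classes, id_hvs, level_hvs, levels):
    q = encode(x, id_hvs, level_hvs, levels)
    return max(classes, key=lambda k: q @ classes[k])   # dot product as similarity

# toy data: two noisy clusters with 8 features each
X = np.vstack([rng.normal(0.2, 0.05, (50, 8)), rng.normal(0.8, 0.05, (50, 8))])
y = [0] * 50 + [1] * 50
levels = np.linspace(0, 1, 11)
id_hvs, level_hvs = random_hv(8), random_hv(len(levels))
model = train(X, y, id_hvs, level_hvs, levels)
print(sum(predict(x, model, id_hvs, level_hvs, levels) == t for x, t in zip(X, y)), "/ 100")
```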
{
"docid": "06654ef57e96d2e7cd969d271240371d",
"text": "The construction industry has been facing a paradigm shift to (i) increase; productivity, efficiency, infrastructure value, quality and sustainability, (ii) reduce; lifecycle costs, lead times and duplications, via effective collaboration and communication of stakeholders in construction projects. Digital construction is a political initiative to address low productivity in the sector. This seeks to integrate processes throughout the entire lifecycle by utilising building information modelling (BIM) systems. The focus is to create and reuse consistent digital information by the stakeholders throughout the lifecycle. However, implementation and use of BIM systems requires dramatic changes in the current business practices, bring new challenges for stakeholders e.g., the emerging knowledge and skill gap. This paper reviews and discusses the status of implementation of the BIM systems around the globe and their implications to the industry. Moreover, based on the lessons learnt, it will provide a guide to tackle these challenges and to facilitate successful transition towards utilizing BIM systems in construction projects.",
"title": ""
}
] |
scidocsrr
|
21d4139eba13e645375c017caacb1d85
|
Using graded implicit feedback for Bayesian personalized ranking
|
[
{
"docid": "13b887760a87bc1db53b16eb4fba2a01",
"text": "Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.",
"title": ""
},
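As a concrete, if much simpler, illustration of modeling time-changing behavior, the sketch below fits a baseline predictor with a static user bias and per-time-bin item biases by stochastic gradient descent; it only mirrors the flavor of the temporal models discussed above rather than reproducing them.

```python
import numpy as np
from collections import defaultdict

def fit_temporal_baseline(ratings, n_bins=30, epochs=20, lr=0.01, reg=0.02):
    """ratings: list of (user, item, t, r) tuples with t normalized to [0, 1)."""
    mu = np.mean([r for _, _, _, r in ratings])
    bu = defaultdict(float)                        # static user bias
    bi = defaultdict(lambda: np.zeros(n_bins))     # item bias per time bin
    for _ in range(epochs):
        for u, i, t, r in ratings:
            b = int(t * n_bins)
            err = r - (mu + bu[u] + bi[i][b])
            bu[u] += lr * (err - reg * bu[u])
            bi[i][b] += lr * (err - reg * bi[i][b])
    return mu, bu, bi

def predict(mu, bu, bi, u, i, t, n_bins=30):
    return mu + bu[u] + bi[i][int(t * n_bins)]
```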
{
"docid": "f8ea6c873594b0971989cc462527ca97",
"text": "Recommender system aim at providing a personalized list of items ranked according to the preferences of the user, as such ranking methods are at the core of many recommendation algorithms. The topic of this tutorial focuses on the cutting-edge algorithmic development in the area of recommender systems. This tutorial will provide an in depth picture of the progress of ranking models in the field, summarizing the strengths and weaknesses of existing methods, and discussing open issues that could be promising for future research in the community. A qualitative and quantitative comparison between different models will be provided while we will also highlight recent developments in the areas of Reinforcement Learning.",
"title": ""
},
{
"docid": "f1d11ef2739e02af2a95cbc93036bf43",
"text": "Extended Collaborative Less-is-More Filtering xCLiMF is a learning to rank model for collaborative filtering that is specifically designed for use with data where information on the level of relevance of the recommendations exists, e.g. through ratings. xCLiMF can be seen as a generalization of the Collaborative Less-is-More Filtering (CLiMF) method that was proposed for top-N recommendations using binary relevance (implicit feedback) data. The key contribution of the xCLiMF algorithm is that it builds a recommendation model by optimizing Expected Reciprocal Rank, an evaluation metric that generalizes reciprocal rank in order to incorporate user feedback with multiple levels of relevance. Experimental results on real-world datasets show the effectiveness of xCLiMF, and also demonstrate its advantage over CLiMF when more than two levels of relevance exist in the data.",
"title": ""
}
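Since the key contribution above is optimizing Expected Reciprocal Rank, a direct implementation of the ERR metric itself is a useful reference: graded relevance labels are mapped to stopping probabilities and the ranked list is walked from the top.

```python
def expected_reciprocal_rank(ranked_grades, g_max=None):
    """ranked_grades: relevance grades of items in ranked order (e.g. 0-4 star ratings)."""
    if g_max is None:
        g_max = max(ranked_grades) if ranked_grades else 1
    err, p_continue = 0.0, 1.0
    for rank, g in enumerate(ranked_grades, start=1):
        r = (2 ** g - 1) / (2 ** g_max)     # probability the user is satisfied at this rank
        err += p_continue * r / rank
        p_continue *= (1.0 - r)             # user keeps scanning only if not yet satisfied
    return err

print(expected_reciprocal_rank([4, 0, 2, 1], g_max=4))   # a highly relevant top item dominates
```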
] |
[
{
"docid": "7448defe73a531018b11ac4b4b38b4cb",
"text": "Calcium oxalate crystalluria is a problem of growing concern in dogs. A few reports have discussed acute kidney injury by oxalates in dogs, describing ultrastructural findings in particular. We evaluated the possibility of deposition of calcium oxalate crystals in renal tissue and its probable consequences. Six dogs were intravenously injected with 0.5 M potassium oxalate (KOx) for seven consecutive days. By the end of the experiment, ultrasonography revealed a significant increase in the renal mass and renal parenchymal echogenicity. Serum creatinine and blood urea nitrogen levels were gradually increased. The histopathological features of the kidneys were assessed by both light and electron microscopy, which showed CaOx crystal deposition accompanied by morphological changes in the renal tissue of KOx injected dogs. Canine renal oxalosis provides a good model to study the biological and pathological changes induced upon damage of renal tissue by KOx injection.",
"title": ""
},
{
"docid": "2476e67447d873c0698fce0b032e6d90",
"text": "The emerging paradigm of the Internet of Everything, along with the increasing demand of Internet services everywhere, results in a remarkable and continuous growth of the global Internet traffic. As a cost-effective Internet access solution, WiFi networks currently generate a major portion of the global Internet traffic. Furthermore, the number of WiFi public hotspots worldwide is expected to increase by more than sevenfold by 2018. To face this huge increase in the number of densely deployed WiFi networks, and the massive amount of data to be supported by these networks in indoor and outdoor environments, it is necessary to improve the current WiFi standard and define specifications for high efficiency wireless local area networks (HEWs). This paper presents potential techniques that can be applied for HEWs, in order to achieve the required performance in dense HEW deployment scenarios, as expected in the near future. The HEW solutions under consideration includes physical layer techniques, medium access control layer strategies, spatial frequency reuse schemes, and power saving mechanisms. To accurately assess a newly proposed HEW scheme, we discuss suitable evaluation methodologies, by defining simulation scenarios that represent future HEW usage models, performance metrics that reflect HEW user experience, traffic models for dominant HEW applications, and channel models for indoor and outdoor HEW deployments. Finally, we highlight open issues for future HEW research and development.",
"title": ""
},
{
"docid": "012bcbc6b5e7b8aaafd03f100489961c",
"text": "DNA is an attractive medium to store digital information. Here we report a storage strategy, called DNA Fountain, that is highly robust and approaches the information capacity per nucleotide. Using our approach, we stored a full computer operating system, movie, and other files with a total of 2.14 × 106 bytes in DNA oligonucleotides and perfectly retrieved the information from a sequencing coverage equivalent to a single tile of Illumina sequencing. We also tested a process that can allow 2.18 × 1015 retrievals using the original DNA sample and were able to perfectly decode the data. Finally, we explored the limit of our architecture in terms of bytes per molecule and obtained a perfect retrieval from a density of 215 petabytes per gram of DNA, orders of magnitude higher than previous reports.",
"title": ""
},
{
"docid": "cb26bb277afc6d521c4c5960b35ed77d",
"text": "We propose a novel algorithm for the segmentation and prerecognition of offline handwritten Arabic text. Our character segmentation method over-segments each word, and then removes extra breakpoints using knowledge of letter shapes. On a test set of 200 images, 92.3% of the segmentation points were detected correctly, with 5.1% instances of over-segmentation. The prerecognition component annotates each detected letter with shape information, to be used for recognition in future work.",
"title": ""
},
{
"docid": "5cba55a67ba27c39ad72e82608052ae1",
"text": "This letter presents a novel dual-band rectifier with extended power range (EPR) and an optimal incident RF power strategy in the settings where the available RF energy fluctuates considerably. It maintains high power conversion efficiency (PCE) in an ultra-wide input power range by adopting a pHEMT in the proposed topology. Simultaneous RF power incident mode is proposed and preferred to the traditional independent mode for multi-band harvesting. Measured results show that more than 30% PCE is obtained with input power ranging from -15 dBm to 20 dBm and peak PCE of 60% is maintained from 5 to 15 dBm. Positive power gain is achieved from -20 dBm to more than 10 dBm. Investigation about the effect of RF power incident ratio on dual-band harvesting's performance is presented and it provides a good reference for future multi-band harvesting system design.",
"title": ""
},
{
"docid": "eba769c6246b44d8ed7e5f08aac17731",
"text": "One hundred men, living in three villages in a remote region of the Eastern Highlands of Papua New Guinea were asked to judge the attractiveness of photographs of women who had undergone micrograft surgery to reduce their waist-to-hip ratios (WHRs). Micrograft surgery involves harvesting adipose tissue from the waist and reshaping the buttocks to produce a low WHR and an \"hourglass\" female figure. Men consistently chose postoperative photographs as being more attractive than preoperative photographs of the same women. Some women gained, and some lost weight, postoperatively, with resultant changes in body mass index (BMI). However, changes in BMI were not related to men's judgments of attractiveness. These results show that the hourglass female figure is rated as attractive by men living in a remote, indigenous community, and that when controlling for BMI, WHR plays a crucial role in their attractiveness judgments.",
"title": ""
},
{
"docid": "14d480e4c9256d0ef5e5684860ae4d7f",
"text": "Changes in land use and land cover (LULC) as well as climate are likely to affect the geographic distribution of malaria vectors and parasites in the coming decades. At present, malaria transmission is concentrated mainly in the Amazon basin where extensive agriculture, mining, and logging activities have resulted in changes to local and regional hydrology, massive loss of forest cover, and increased contact between malaria vectors and hosts. Employing presence-only records, bioclimatic, topographic, hydrologic, LULC and human population data, we modeled the distribution of malaria and two of its dominant vectors, Anopheles darlingi, and Anopheles nuneztovari s.l. in northern South America using the species distribution modeling platform Maxent. Results from our land change modeling indicate that about 70,000 km2 of forest land would be lost by 2050 and 78,000 km2 by 2070 compared to 2010. The Maxent model predicted zones of relatively high habitat suitability for malaria and the vectors mainly within the Amazon and along coastlines. While areas with malaria are expected to decrease in line with current downward trends, both vectors are predicted to experience range expansions in the future. Elevation, annual precipitation and temperature were influential in all models both current and future. Human population mostly affected An. darlingi distribution while LULC changes influenced An. nuneztovari s.l. distribution. As the region tackles the challenge of malaria elimination, investigations such as this could be useful for planning and management purposes and aid in predicting and addressing potential impediments to elimination.",
"title": ""
},
{
"docid": "ee9f21361d01a8c678fece3c425f35c2",
"text": "Probabilistic model-based clustering, based on nite mixtures of multivariate models, is a useful framework for clustering data in a statistical context. This general framework can be directly extended to clustering of sequential data, based on nite mixtures of sequential models. In this paper we consider the problem of tting mixture models where both multivariate and sequential observations are present. A general EM algorithm is discussed and experimental results demonstrated on simulated data. The problem is motivated by the practical problem of clustering individuals into groups based on both their static characteristics and their dynamic behavior.",
"title": ""
},
{
"docid": "69f4dc7729dd74642c7b66276c26a971",
"text": "Hill and Kertz studied the prophet inequality on iid distributions [The Annals of Probability 1982]. They proved a theoretical bound of 1 â 1/e on the approximation factor of their algorithm. They conjectured that the best approximation factor for arbitrarily large n is 1/1+1/eâ 0.731. This conjecture remained open prior to this paper for over 30 years. In this paper we present a threshold-based algorithm for the prophet inequality with n iid distributions. Using a nontrivial and novel approach we show that our algorithm is a 0.738-approximation algorithm. By beating the bound of 1/1+1/e, this refutes the conjecture of Hill and Kertz. Moreover, we generalize our results to non-uniform distributions and discuss its applications in mechanism design.",
"title": ""
},
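A quick way to get intuition for threshold-based prophet-inequality algorithms is to simulate one against the prophet's hindsight maximum on iid draws. The threshold rule below (chosen so the maximum exceeds it with probability 1/2) is a common textbook choice for illustration, not necessarily the rule that achieves the 0.738 bound.

```python
import numpy as np

rng = np.random.default_rng(1)

def threshold_policy(values, threshold):
    """Stop at the first value exceeding the threshold (take the last one otherwise)."""
    for v in values:
        if v >= threshold:
            return v
    return values[-1]

def simulate(n=20, trials=100_000):
    # iid Uniform(0, 1) draws; choose T so that P(max >= T) = 1/2, i.e. T^n = 1/2
    threshold = 0.5 ** (1.0 / n)
    algo, prophet = 0.0, 0.0
    for _ in range(trials):
        values = rng.uniform(0, 1, n)
        algo += threshold_policy(values, threshold)
        prophet += values.max()
    return algo / prophet          # empirical approximation ratio

print(round(simulate(), 3))
```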
{
"docid": "b10447097f8d513795b4f4e08e1838d8",
"text": "We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use a deep highway BiLSTM architecture with constrained decoding, while observing a number of recent best practices for initialization and regularization. Our 8-layer ensemble model achieves 83.2 F1 on the CoNLL 2005 test set and 83.4 F1 on CoNLL 2012, roughly a 10% relative error reduction over the previous state of the art. Extensive empirical analysis of these gains show that (1) deep models excel at recovering long-distance dependencies but can still make surprisingly obvious errors, and (2) that there is still room for syntactic parsers to improve these results.",
"title": ""
},
{
"docid": "7531be3af1285a4c1c0b752d1ee45f52",
"text": "Given an undirected graph with weight for each vertex, the maximum weight clique problem is to find the clique of the maximum weight. Östergård proposed a fast exact algorithm for solving this problem. We show his algorithm is not efficient for very dense graphs. We propose an exact algorithm for the problem, which is faster than Östergård’s algorithm in case the graph is dense. We show the efficiency of our algorithm with some experimental results.",
"title": ""
},
{
"docid": "65d3d020ee63cdeb74cb3da159999635",
"text": "We investigated the effects of format of an initial test and whether or not students received corrective feedback on that test on a final test of retention 3 days later. In Experiment 1, subjects studied four short journal papers. Immediately after reading each paper, they received either a multiple choice (MC) test, a short answer (SA) test, a list of statements to read, or a filler task. The MC test, SA test, and list of statements tapped identical facts from the studied material. No feedback was provided during the initial tests. On a final test 3 days later (consisting of MC and SA questions), having had an intervening MC test led to better performance than an intervening SA test, but the intervening MC condition did not differ significantly from the read statements condition. To better equate exposure to test-relevant information, corrective feedback during the initial tests was introduced in Experiment 2. With feedback provided, having had an intervening SA test led to the best performance on the final test, suggesting that the more demanding the retrieval processes engendered by the intervening test, the greater the benefit to final retention. The practical application of these findings is that regular SA quizzes with feedback may be more effective in enhancing student learning than repeated presentation of target facts or taking an MC quiz.",
"title": ""
},
{
"docid": "c02a55b5a3536f3ab12c65dd0d3037ef",
"text": "The emergence of large-scale receptor-based systems has enabled applications to execute complex business logic over data generated from monitoring the physical world. An important functionality required by these applications is the detection and response to complex events, often in real-time. Bridging the gap between low-level receptor technology and such high-level needs of applications remains a significant challenge.We demonstrate our solution to this problem in the context of HiFi, a system we are building to solve the data management problems of large-scale receptor-based systems. Specifically, we show how HiFi generates simple events out of receptor data at its edges and provides high-functionality complex event processing mechanisms for sophisticated event detection using a real-world library scenario.",
"title": ""
},
{
"docid": "78e2311b0c40d055abc144d11926c831",
"text": "Intrusion Detection System is used to detect suspicious activities is one form of defense. However, the sheer size of the network logs makes human log analysis intractable. Furthermore, traditional intrusion detection methods based on pattern matching techniques cannot cope with the need for faster speed to manually update those patterns. Anomaly detection is used as a part of the intrusion detection system, which in turn use certain data mining techniques. Data mining techniques can be applied to the network data to detect possible intrusions. The foremost step in application of data mining techniques is the selection of appropriate features from the data. This paper aims to build an Intrusion Detection System that can detect known and unknown intrusion automatically. Under a data mining framework, the IDS are trained with statistical algorithm, named Chi-Square statistics. This study shows the plan, implementation and the analyze of these threats by using a Chi-Square statistic technique, in order to prevent these attacks and to make a Network Intrusion detection system (NIDS). This proposed model is used to detect anomaly-based network to see how effective this statistical technique in detecting intrusions.",
"title": ""
},
{
"docid": "73973ae6c858953f934396ab62276e0d",
"text": "The unsolicited bulk messages are widespread in the applications of short messages. Although the existing spam filters have satisfying performance, they are facing the challenge of an adversary who misleads the spam filters by manipulating samples. Until now, the vulnerability of spam filtering technique for short messages has not been investigated. Different from the other spam applications, a short message only has a few words and its length usually has an upper limit. The current adversarial learning algorithms may not work efficiently in short message spam filtering. In this paper, we investigate the existing good word attack and its counterattack method, i.e. the feature reweighting, in short message spam filtering in an effort to understand whether, and to what extent, they can work efficiently when the length of a message is limited. This paper proposes a good word attack strategy which maximizes the influence to a classifier with the least number of inserted characters based on the weight values and also the length of words. On the other hand, we also proposes the feature reweighting method with a new rescaling function which minimizes the importance of the feature representing a short word in order to require more inserted characters for a successful evasion. The methods are evaluated experimentally by using the SMS and the comment spam dataset. The results confirm that the length of words is a critical factor of the robustness of short message spam filtering to good word attack. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "9223330ceb0b0575379c238672b8afc2",
"text": "Contact networks are often used in epidemiological studies to describe the patterns of interactions within a population. Often, such networks merely indicate which individuals interact, without giving any indication of the strength or intensity of interactions. Here, we use weighted networks, in which every connection has an associated weight, to explore the influence of heterogeneous contact strengths on the effectiveness of control measures. We show that, by using contact weights to evaluate an individual's influence on an epidemic, individual infection risk can be estimated and targeted interventions such as preventative vaccination can be applied effectively. We use a diary study of social mixing behaviour to indicate the patterns of contact weights displayed by a real population in a range of different contexts, including physical interactions; we use these data to show that considerations of link weight can in some cases lead to improved interventions in the case of infections that spread through close contact interactions. However, we also see that simpler measures, such as an individual's total number of social contacts or even just their number of contacts during a single day, can lead to great improvements on random vaccination. We therefore conclude that, for many infections, enhanced social contact data can be simply used to improve disease control but that it is not necessary to have full social mixing information in order to enhance interventions.",
"title": ""
},
{
"docid": "d5130b0353dd05e6a0e6e107c9b863e0",
"text": "We study Euler–Poincaré systems (i.e., the Lagrangian analogue of LiePoisson Hamiltonian systems) defined on semidirect product Lie algebras. We first give a derivation of the Euler–Poincaré equations for a parameter dependent Lagrangian by using a variational principle of Lagrange d’Alembert type. Then we derive an abstract Kelvin-Noether theorem for these equations. We also explore their relation with the theory of Lie-Poisson Hamiltonian systems defined on the dual of a semidirect product Lie algebra. The Legendre transformation in such cases is often not invertible; thus, it does not produce a corresponding Euler–Poincaré system on that Lie algebra. We avoid this potential difficulty by developing the theory of Euler–Poincaré systems entirely within the Lagrangian framework. We apply the general theory to a number of known examples, including the heavy top, ideal compressible fluids and MHD. We also use this framework to derive higher dimensional Camassa-Holm equations, which have many potentially interesting analytical properties. These equations are Euler-Poincaré equations for geodesics on diffeomorphism groups (in the sense of the Arnold program) but where the metric is H rather than L. ∗Research partially supported by NSF grant DMS 96–33161. †Research partially supported by NSF Grant DMS-9503273 and DOE contract DE-FG0395ER25245-A000.",
"title": ""
},
{
"docid": "0ed8212399f2e93017fde1c5819acb61",
"text": "This study examines the acceptance of technology and behavioral intention to use learning management systems (LMS). In specific, the aim of the research reported in this paper is to examine whether students ultimately accept LMSs such as eClass and the impact of behavioral intention on their decision to use them. An extended version of technology acceptance model has been proposed and used by employing one of the most reliable measures of perceived eased of use, the System Usability Scale. 345 university students participated in the study. The data analysis was based on partial least squares method. The majority of the research hypotheses were confirmed. In particular, social norm, system access and self-efficacy were found to significantly affect behavioral intention to use. As a result, it is suggested that e-learning developers and stakeholders should focus on these factors to increase acceptance and effectiveness of learning management systems.",
"title": ""
},
{
"docid": "1f4c0407c8da7b5fe685ad9763be937b",
"text": "As the dominant mobile computing platform, Android has become a prime target for cyber-security attacks. Many of these attacks are manifested at the application level, and through the exploitation of vulnerabilities in apps downloaded from the popular app stores. Increasingly, sophisticated attacks exploit the vulnerabilities in multiple installed apps, making it extremely difficult to foresee such attacks, as neither the app developers nor the store operators know a priori which apps will be installed together. This paper presents an approach that allows the end-users to safeguard a given bundle of apps installed on their device from such attacks. The approach, realized in a tool, called DROIDGUARD, combines static code analysis with lightweight formal methods to automatically infer security-relevant properties from a bundle of apps. It then uses a constraint solver to synthesize possible security exploits, from which fine-grained security policies are derived and automatically enforced to protect a given device. In our experiments with over 4,000 Android apps, DROIDGUARD has proven to be highly effective at detecting previously unknown vulnerabilities as well as preventing their exploitation.",
"title": ""
},
{
"docid": "e31ea6b8c4a5df049782b463abc602ea",
"text": "Nature plays a very important role to solve problems in a very effective and well-organized way. Few researchers are trying to create computational methods that can assist human to solve difficult problems. Nature inspired techniques like swarm intelligence, bio-inspired, physics/chemistry and many more have helped in solving difficult problems and also provide most favourable solution. Nature inspired techniques are wellmatched for soft computing application because parallel, dynamic and self organising behaviour. These algorithms motivated from the working group of social agents like ants, bees and insect. This paper is a complete survey of nature inspired techniques.",
"title": ""
}
] |
scidocsrr
|
a2c03a19b1e12da7fca66855a2266e6f
|
SQenloT: Semantic query engine for industrial Internet-of-Things gateways
|
[
{
"docid": "fdac9bbe4e92fedfcd237878afdefc90",
"text": "Pervasive and sensor-driven systems are by nature open and extensible, both in terms of input and tasks they are required to perform. Data streams coming from sensors are inherently noisy, imprecise and inaccurate, with di↵ering sampling rates and complex correlations with each other. These characteristics pose a significant challenge for traditional approaches to storing, representing, exchanging, manipulating and programming with sensor data. Semantic Web technologies provide a uniform framework for capturing these properties. O↵ering powerful representation facilities and reasoning techniques, these technologies are rapidly gaining attention towards facing a range of issues such as data and knowledge modelling, querying, reasoning, service discovery, privacy and provenance. This article reviews the application of the Semantic Web to pervasive and sensor-driven systems with a focus on information modelling and reasoning along with streaming data and uncertainty handling. The strengths and weaknesses of current and projected approaches are analysed and a roadmap is derived for using the Semantic Web as a platform, on which open, standard-based, pervasive, adaptive and sensor-driven systems can be deployed.",
"title": ""
}
] |
[
{
"docid": "60a4d92be550fb5f729359f472420c29",
"text": "A simple and effective technique for designing integrated planar Marchand balun is presented in this paper. The approach uses the physical transformer model to replace the lossy coupled transmission lines in a conventional Marchand balun design. As a demonstration and validation of the design approach, a Marchand balun using silicon-based integrated passive device (IPD) technology is carried out at a center frequency of 2.45 GHz. The measured results show low insertion loss and high balance property over a wide bandwidth for the implemented Marchand balun. Comparison among modeled, EM simulated and measured results shows good agreement.",
"title": ""
},
{
"docid": "40495cc96353f56481ed30f7f5709756",
"text": "This paper reported the construction of partial discharge measurement system under influence of cylindrical metal particle in transformer oil. The partial discharge of free cylindrical metal particle in the uniform electric field under AC applied voltage was studied in this paper. The partial discharge inception voltage (PDIV) for the single particle was measure to be 11kV. The typical waveform of positive PD and negative PD was also obtained. The result shows that the magnitude of negative PD is higher compared to positive PD. The observation on cylindrical metal particle movement revealed that there were a few stages of motion process involved.",
"title": ""
},
{
"docid": "8b1fa33cc90434abddf5458e05db0293",
"text": "The Stand-Alone Modula-2 System (SAM2S) is a portable, concurrent operating system and Modula-2 programming support environment. It is based on a highly modular kernel task running on single process-multiplexed microcomputers. SAM2S offers extensive network communication facilities. It provides the foundation for the locally resident portions of the MICROS distributed operating system for large netcomputers. SAM2S now supports a five-pass Modula-2 compiler, a task linker, link and load file decoders, a static symbolic debugger, a filer, and other utility tasks. SAM2S is currently running on each node of a network of DEC LSI-11/23 and Heurikon/Motorola 68000 workstations connected by an Ethernet. This paper reviews features of Modula-2 for operating system development and outlines the design of SAM2S with special emphasis on its modularity and communication flexibility. The two SAM2S implementations differ mainly in their peripheral drivers and in the large amount of memory available on the 68000 systems. Modula-2 has proved highly suitable for writing large, portable, concurrent and distributed operating systems.",
"title": ""
},
{
"docid": "13584c61e4caecf3828f2a11037f492e",
"text": "Privacy in social networks is a large and growing concern in recent times. It refers to various issues in a social network which include privacy of users, links, and their attributes. Each privacy component of a social network is vast and consists of various sub-problems. For example, user privacy includes multiple sub-problems like user location privacy, and user personal information privacy. This survey on privacy in social networks is intended to serve as an initial introduction and starting step to all further researchers. We present various privacy preserving models and methods include naive anonymization, perturbation, or building a complete alternative network. We show the work done by multiple researchers in the past, where social networks are stated as network graphs with users represented as nodes and friendship between users represented as links between the nodes. We study ways and mechanisms developed to protect these nodes and links in the network. We also review other systems proposed, along with all the available databases for future researchers in this area.",
"title": ""
},
{
"docid": "fcab229efac66654e418e4e23f49c099",
"text": "An adaptive and fast constant false alarm rate (CFAR) algorithm based on automatic censoring (AC) is proposed for target detection in high-resolution synthetic aperture radar (SAR) images. First, an adaptive global threshold is selected to obtain an index matrix which labels whether each pixel of the image is a potential target pixel or not. Second, by using the index matrix, the clutter environment can be determined adaptively to prescreen the clutter pixels in the sliding window used for detecting. The G 0 distribution, which can model multilook SAR images within an extensive range of degree of homogeneity, is adopted as the statistical model of clutter in this paper. With the introduction of AC, the proposed algorithm gains good CFAR detection performance for homogeneous regions, clutter edge, and multitarget situations. Meanwhile, the corresponding fast algorithm greatly reduces the computational load. Finally, target clustering is implemented to obtain more accurate target regions. According to the theoretical performance analysis and the experiment results of typical real SAR images, the proposed algorithm is shown to be of good performance and strong practicability.",
"title": ""
},
{
"docid": "664a759c81c6f2fbaa2941acfe1c34e4",
"text": "Convolutional highways are deep networks based on multiple stacked convolutional layers for feature preprocessing. We introduce an evolutionary algorithm (EA) for optimization of the structure and hyperparameters of convolutional highways and demonstrate the potential of this optimization setting on the well-known MNIST data set. The (1+1)-EA employs Rechenberg’s mutation rate control and a niching mechanism to overcome local optima adapts the optimization approach. An experimental study shows that the EA is capable of improving the state-of-the-art network contribution and of evolving highway networks from scratch.",
"title": ""
},
{
"docid": "7c0748301936c39166b9f91ba72d92ef",
"text": "methods and native methods are considered to be type safe if they do not override a final method. methodIsTypeSafe(Class, Method) :doesNotOverrideFinalMethod(Class, Method), methodAccessFlags(Method, AccessFlags), member(abstract, AccessFlags). methodIsTypeSafe(Class, Method) :doesNotOverrideFinalMethod(Class, Method), methodAccessFlags(Method, AccessFlags), member(native, AccessFlags). private methods and static methods are orthogonal to dynamic method dispatch, so they never override other methods (§5.4.5). doesNotOverrideFinalMethod(class('java/lang/Object', L), Method) :isBootstrapLoader(L). doesNotOverrideFinalMethod(Class, Method) :isPrivate(Method, Class). doesNotOverrideFinalMethod(Class, Method) :isStatic(Method, Class). doesNotOverrideFinalMethod(Class, Method) :isNotPrivate(Method, Class), isNotStatic(Method, Class), doesNotOverrideFinalMethodOfSuperclass(Class, Method). doesNotOverrideFinalMethodOfSuperclass(Class, Method) :classSuperClassName(Class, SuperclassName), classDefiningLoader(Class, L), loadedClass(SuperclassName, L, Superclass), classMethods(Superclass, SuperMethodList), finalMethodNotOverridden(Method, Superclass, SuperMethodList). 4.10 Verification of class Files THE CLASS FILE FORMAT 202 final methods that are private and/or static are unusual, as private methods and static methods cannot be overridden per se. Therefore, if a final private method or a final static method is found, it was logically not overridden by another method. finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isFinal(Method, Superclass), isPrivate(Method, Superclass). finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isFinal(Method, Superclass), isStatic(Method, Superclass). If a non-final private method or a non-final static method is found, skip over it because it is orthogonal to overriding. finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isNotFinal(Method, Superclass), isPrivate(Method, Superclass), doesNotOverrideFinalMethodOfSuperclass(Superclass, Method). finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isNotFinal(Method, Superclass), isStatic(Method, Superclass), doesNotOverrideFinalMethodOfSuperclass(Superclass, Method). THE CLASS FILE FORMAT Verification of class Files 4.10 203 If a non-final, non-private, non-static method is found, then indeed a final method was not overridden. Otherwise, recurse upwards. finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isNotFinal(Method, Superclass), isNotStatic(Method, Superclass), isNotPrivate(Method, Superclass). finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), notMember(method(_, Name, Descriptor), SuperMethodList), doesNotOverrideFinalMethodOfSuperclass(Superclass, Method). 
4.10 Verification of class Files THE CLASS FILE FORMAT 204 4.10.1.6 Type Checking Methods with Code Non-abstract, non-native methods are type correct if they have code and the code is type correct. methodIsTypeSafe(Class, Method) :doesNotOverrideFinalMethod(Class, Method), methodAccessFlags(Method, AccessFlags), methodAttributes(Method, Attributes), notMember(native, AccessFlags), notMember(abstract, AccessFlags), member(attribute('Code', _), Attributes), methodWithCodeIsTypeSafe(Class, Method). A method with code is type safe if it is possible to merge the code and the stack map frames into a single stream such that each stack map frame precedes the instruction it corresponds to, and the merged stream is type correct. The method's exception handlers, if any, must also be legal. methodWithCodeIsTypeSafe(Class, Method) :parseCodeAttribute(Class, Method, FrameSize, MaxStack, ParsedCode, Handlers, StackMap), mergeStackMapAndCode(StackMap, ParsedCode, MergedCode), methodInitialStackFrame(Class, Method, FrameSize, StackFrame, ReturnType), Environment = environment(Class, Method, ReturnType, MergedCode, MaxStack, Handlers), handlersAreLegal(Environment), mergedCodeIsTypeSafe(Environment, MergedCode, StackFrame). THE CLASS FILE FORMAT Verification of class Files 4.10 205 Let us consider exception handlers first. An exception handler is represented by a functor application of the form: handler(Start, End, Target, ClassName) whose arguments are, respectively, the start and end of the range of instructions covered by the handler, the first instruction of the handler code, and the name of the exception class that this handler is designed to handle. An exception handler is legal if its start (Start) is less than its end (End), there exists an instruction whose offset is equal to Start, there exists an instruction whose offset equals End, and the handler's exception class is assignable to the class Throwable. The exception class of a handler is Throwable if the handler's class entry is 0, otherwise it is the class named in the handler. An additional requirement exists for a handler inside an <init> method if one of the instructions covered by the handler is invokespecial of an <init> method. In this case, the fact that a handler is running means the object under construction is likely broken, so it is important that the handler does not swallow the exception and allow the enclosing <init> method to return normally to the caller. Accordingly, the handler is required to either complete abruptly by throwing an exception to the caller of the enclosing <init> method, or to loop forever. 4.10 Verification of class Files THE CLASS FILE FORMAT 206 handlersAreLegal(Environment) :exceptionHandlers(Environment, Handlers), checklist(handlerIsLegal(Environment), Handlers). handlerIsLegal(Environment, Handler) :Handler = handler(Start, End, Target, _), Start < End, allInstructions(Environment, Instructions), member(instruction(Start, _), Instructions), offsetStackFrame(Environment, Target, _), instructionsIncludeEnd(Instructions, End), currentClassLoader(Environment, CurrentLoader), handlerExceptionClass(Handler, ExceptionClass, CurrentLoader), isBootstrapLoader(BL), isAssignable(ExceptionClass, class('java/lang/Throwable', BL)), initHandlerIsLegal(Environment, Handler). instructionsIncludeEnd(Instructions, End) :member(instruction(End, _), Instructions). instructionsIncludeEnd(Instructions, End) :member(endOfCode(End), Instructions). 
handlerExceptionClass(handler(_, _, _, 0), class('java/lang/Throwable', BL), _) :isBootstrapLoader(BL). handlerExceptionClass(handler(_, _, _, Name), class(Name, L), L) :Name \\= 0. THE CLASS FILE FORMAT Verification of class Files 4.10 207 initHandlerIsLegal(Environment, Handler) :notInitHandler(Environment, Handler). notInitHandler(Environment, Handler) :Environment = environment(_Class, Method, _, Instructions, _, _), isNotInit(Method). notInitHandler(Environment, Handler) :Environment = environment(_Class, Method, _, Instructions, _, _), isInit(Method), member(instruction(_, invokespecial(CP)), Instructions), CP = method(MethodClassName, MethodName, Descriptor), MethodName \\= '<init>'. initHandlerIsLegal(Environment, Handler) :isInitHandler(Environment, Handler), sublist(isApplicableInstruction(Target), Instructions, HandlerInstructions), noAttemptToReturnNormally(HandlerInstructions). isInitHandler(Environment, Handler) :Environment = environment(_Class, Method, _, Instructions, _, _), isInit(Method). member(instruction(_, invokespecial(CP)), Instructions), CP = method(MethodClassName, '<init>', Descriptor). isApplicableInstruction(HandlerStart, instruction(Offset, _)) :Offset >= HandlerStart. noAttemptToReturnNormally(Instructions) :notMember(instruction(_, return), Instructions). noAttemptToReturnNormally(Instructions) :member(instruction(_, athrow), Instructions). 4.10 Verification of class Files THE CLASS FILE FORMAT 208 Let us now turn to the stream of instructions and stack map frames. Merging instructions and stack map frames into a single stream involves four cases: • Merging an empty StackMap and a list of instructions yields the original list of instructions. mergeStackMapAndCode([], CodeList, CodeList). • Given a list of stack map frames beginning with the type state for the instruction at Offset, and a list of instructions beginning at Offset, the merged list is the head of the stack map frame list, followed by the head of the instruction list, followed by the merge of the tails of the two lists. mergeStackMapAndCode([stackMap(Offset, Map) | RestMap], [instruction(Offset, Parse) | RestCode], [stackMap(Offset, Map), instruction(Offset, Parse) | RestMerge]) :mergeStackMapAndCode(RestMap, RestCode, RestMerge). • Otherwise, given a list of stack map frames beginning with the type state for the instruction at OffsetM, and a list of instructions beginning at OffsetP, then, if OffsetP < OffsetM, the merged list consists of the head of the instruction list, followed by the merge of the stack map frame list and the tail of the instruction list. mergeStackMapAndCode([stackMap(OffsetM, Map) | RestMap], [instruction(OffsetP, Parse) | RestCode], [instruction(OffsetP, Parse) | RestMerge]) :OffsetP < OffsetM, mergeStackMapAndCode([stackMap(OffsetM, Map) | RestMap], RestCode, RestMerge). • Otherwise, the merge of the two lists is undefined. Since the instruction list has monotonically increasing offsets, the merge of the two lists is not defined unless every stack map frame offset has a corresponding instruction offset and the stack map frames are in monotonically ",
"title": ""
},
{
"docid": "bddd2a1bec31d75892bce94f2b6b6387",
"text": "We present a real-time system for 3D head pose estimation and facial landmark localization using a commodity depth sensor. We introduce a novel triangular surface patch (TSP) descriptor, which encodes the shape of the 3D surface of the face within a triangular area. The proposed descriptor is viewpoint invariant, and it is robust to noise and to variations in the data resolution. Using a fast nearest neighbor lookup, TSP descriptors from an input depth map are matched to the most similar ones that were computed from synthetic head models in a training phase. The matched triangular surface patches in the training set are used to compute estimates of the 3D head pose and facial landmark positions in the input depth map. By sampling many TSP descriptors, many votes for pose and landmark positions are generated which together yield robust final estimates. We evaluate our approach on the publicly available Biwi Kinect Head Pose Database to compare it against state-of-the-art methods. Our results show a significant improvement in the accuracy of both pose and landmark location estimates while maintaining real-time speed.",
"title": ""
},
{
"docid": "7dba7b28582845bf13d9f9373e39a2af",
"text": "The Internet and social media provide a major source of information about people's opinions. Due to the rapidly growing number of online documents, it becomes both time-consuming and hard task to obtain and analyze the desired opinionated information. Sentiment analysis is the classification of sentiments expressed in documents. To improve classification perfromance feature selection methods which help to identify the most valuable features are generally applied. In this paper, we compare the performance of four feature selection methods namely Chi-square, Information Gain, Query Expansion Ranking, and Ant Colony Optimization using Maximum Entropi Modeling classification algorithm over Turkish Twitter dataset. Therefore, the effects of feature selection methods over the performance of sentiment analysis of Turkish Twitter data are evaluated. Experimental results show that Query Expansion Ranking and Ant Colony Optimization methods outperform other traditional feature selection methods for sentiment analysis.",
"title": ""
},
{
"docid": "dde2211bd3e9cceb20cce63d670ebc4c",
"text": "This paper presents the design of a 60 GHz phase shifter integrated with a low-noise amplifier (LNA) and power amplifier (PA) in a 65 nm CMOS technology for phased array systems. The 4-bit digitally controlled RF phase shifter is based on programmable weighted combinations of I/Q paths using digitally controlled variable gain amplifiers (VGAs). With the combination of an LNA, a phase shifter and part of a combiner, each receiver path achieves 7.2 dB noise figure, a 360° phase shift range in steps of approximately 22.5°, an average insertion gain of 12 dB at 61 GHz, a 3 dB-bandwidth of 5.5 GHz and dissipates 78 mW. Consisting of a phase shifter and a PA, one transmitter path achieves a maximum output power of higher than +8.3 dBm, a 360° phase shift range in 22.5° steps, an average insertion gain of 7.7 dB at 62 GHz, a 3 dB-bandwidth of 6.5 GHz and dissipates 168 mW.",
"title": ""
},
{
"docid": "0837c9af9b69367a5a6e32b2f72cef0a",
"text": "Machine learning techniques are increasingly being used in making relevant predictions and inferences on individual subjects neuroimaging scan data. Previous studies have mostly focused on categorical discrimination of patients and matched healthy controls and more recently, on prediction of individual continuous variables such as clinical scores or age. However, these studies are greatly hampered by the large number of predictor variables (voxels) and low observations (subjects) also known as the curse-of-dimensionality or small-n-large-p problem. As a result, feature reduction techniques such as feature subset selection and dimensionality reduction are used to remove redundant predictor variables and experimental noise, a process which mitigates the curse-of-dimensionality and small-n-large-p effects. Feature reduction is an essential step before training a machine learning model to avoid overfitting and therefore improving model prediction accuracy and generalization ability. In this review, we discuss feature reduction techniques used with machine learning in neuroimaging studies.",
"title": ""
},
{
"docid": "db907780a2022761d2595a8ad5d03401",
"text": "This letter is concerned with the stability analysis of neural networks (NNs) with time-varying interval delay. The relationship between the time-varying delay and its lower and upper bounds is taken into account when estimating the upper bound of the derivative of Lyapunov functional. As a result, some improved delay/interval-dependent stability criteria for NNs with time-varying interval delay are proposed. Numerical examples are given to demonstrate the effectiveness and the merits of the proposed method.",
"title": ""
},
{
"docid": "d91afc5fdd46796808016323fb7b9a29",
"text": "The objective of this study is presenting the causal modeling of intention to use technology among university student. Correlation is used as the method of research. Instrument of this study is standard questionnaire. The collected data is analyzed with AMOS software. The result indicate that facilitative condition, cognitive absorption, perceived enjoyment, perceived ease of use, and perceived usefulness have significant and direct effect on intention to use technology. Also, facilitative condition, cognitive absorption, perceived enjoyment, perceived ease of use and computer playfulness have significant and direct of effect on perceived usefulness. Facilitative condition, cognitive absorption, perceived enjoyment, and playfulness have significant and direct effect on perceived ease of use. [Hossien Zare,Sedigheh Yazdanparast. The causal Model of effective factors on Intention to use of information technology among payam noor and Traditional universities students. Life Sci J 2013;10(2):46-50]. (ISSN:1097-8135). http:www.lifesciencesite.com. 8",
"title": ""
},
{
"docid": "1e8caa9f0a189bafebd65df092f918bc",
"text": "For several decades, the role of hormone-replacement therapy (HRT) has been debated. Early observational data on HRT showed many benefits, including a reduction in coronary heart disease (CHD) and mortality. More recently, randomized trials, including the Women's Health Initiative (WHI), studying mostly women many years after the the onset of menopause, showed no such benefit and, indeed, an increased risk of CHD and breast cancer, which led to an abrupt decrease in the use of HRT. Subsequent reanalyzes of data from the WHI with age stratification, newer randomized and observational data and several meta-analyses now consistently show reductions in CHD and mortality when HRT is initiated soon after menopause. HRT also significantly decreases the incidence of various symptoms of menopause and the risk of osteoporotic fractures, and improves quality of life. In younger healthy women (aged 50–60 years), the risk–benefit balance is positive for using HRT, with risks considered rare. As no validated primary prevention strategies are available for younger women (<60 years of age), other than lifestyle management, some consideration might be given to HRT as a prevention strategy as treatment can reduce CHD and all-cause mortality. Although HRT should be primarily oestrogen-based, no particular HRT regimen can be advocated.",
"title": ""
},
{
"docid": "dcef528dbd89bc2c26820bdbe52c3d8d",
"text": "The evolution of digital libraries and the Internet has dramatically transformed the processing, storage, and retrieval of information. Efforts to digitize text, images, video, and audio now consume a substantial portion of both academic anld industrial activity. Even when there is no shortage of textual materials on a particular topic, procedures for indexing or extracting the knowledge or conceptual information contained in them can be lacking. Recently developed information retrieval technologies are based on the concept of a vector space. Data are modeled as a matrix, and a user's query of the database is represented as a vector. Relevant documents in the database are then identified via simple vector operations. Orthogonal factorizations of the matrix provide mechanisms for handling uncertainty in the database itself. The purpose of this paper is to show how such fundamental mathematical concepts from linear algebra can be used to manage and index large text collections.",
"title": ""
},
{
"docid": "0eb75b719f523ca4e9be7fca04892249",
"text": "In this study 2,684 people evaluated the credibility of two live Web sites on a similar topic (such as health sites). We gathered the comments people wrote about each siteís credibility and analyzed the comments to find out what features of a Web site get noticed when people evaluate credibility. We found that the ìdesign lookî of the site was mentioned most frequently, being present in 46.1% of the comments. Next most common were comments about information structure and information focus. In this paper we share sample participant comments in the top 18 areas that people noticed when evaluating Web site credibility. We discuss reasons for the prominence of design look, point out how future studies can build on what we have learned in this new line of research, and outline six design implications for human-computer interaction professionals.",
"title": ""
},
{
"docid": "2bfe219ce52a44299178513d88721353",
"text": "This paper describes a spatio-temporal model of the human visual system (HVS) for video imaging applications, predicting the response of the neurons of the primary visual cortex. The model simulates the behavior of the HVS with a three-dimensional lter bank which decomposes the data into perceptual channels, each one being tuned to a speciic spatial frequency, orientation and temporal frequency. It further accounts for contrast sensitivity, inter-stimuli masking and spatio-temporal interaction. The free parameters of the model have been estimated by psychophysics. The model can then be used as the basis for many applications. As an example, a quality metric for coded video sequences is presented.",
"title": ""
},
{
"docid": "b94429b8f1a8bf06a4efe8305ecf430d",
"text": "Schizophrenia is a complex psychiatric disorder with a characteristic disease course and heterogeneous etiology. While substance use disorders and a family history of psychosis have individually been identified as risk factors for schizophrenia, it is less well understood if and how these factors are related. To address this deficiency, we examined the relationship between substance use disorders and family history of psychosis in a sample of 1219 unrelated patients with schizophrenia. The lifetime rate of substance use disorders in this sample was 50%, and 30% had a family history of psychosis. Latent class mixture modeling identified three distinct patient subgroups: (1) individuals with low probability of substance use disorders; (2) patients with drug and alcohol abuse, but no symptoms of dependence; and (3) patients with substance dependence. Substance use was related to being male, to a more severe disease course, and more acute symptoms at assessment, but not to an earlier age of onset of schizophrenia or a specific pattern of positive and negative symptoms. Furthermore, substance use in schizophrenia was not related to a family history of psychosis. The results suggest that substance use in schizophrenia is an independent risk factor for disease severity and onset.",
"title": ""
},
{
"docid": "57974e76bf29edb7c2ae54462aab839f",
"text": "UWB is a very attractive technology for many applications. It provides many advantages such as fine resolution and high power efficiency. Our interest in the current study is the use of UWB radar technique in microwave medical imaging systems, especially for early breast cancer detection. The Federal Communications Commission FCC allowed frequency bandwidth of 3.1 to 10.6 GHz for this purpose. In this paper we suggest an UWB Bowtie slot antenna with enhanced bandwidth. Effects of varying the geometry of the antenna on its performance and bandwidth are studied. The proposed antenna is simulated in CST Microwave Studio. Details of antenna design and simulation results such as return loss and radiation patterns are discussed in this paper. The final antenna structure exhibits good UWB characteristics and has surpassed the bandwidth requirements. Keywords—Ultra Wide Band (UWB), microwave imaging system, Bowtie antenna, return loss, impedance bandwidth enhancement.",
"title": ""
},
{
"docid": "ccfa5c06643cb3913b0813103a85e0b0",
"text": "We consider the problem of zero-shot recognition: learning a visual classifier for a category with zero training examples, just using the word embedding of the category and its relationship to other categories, which visual data are provided. The key to dealing with the unfamiliar or novel category is to transfer knowledge obtained from familiar classes to describe the unfamiliar class. In this paper, we build upon the recently introduced Graph Convolutional Network (GCN) and propose an approach that uses both semantic embeddings and the categorical relationships to predict the classifiers. Given a learned knowledge graph (KG), our approach takes as input semantic embeddings for each node (representing visual category). After a series of graph convolutions, we predict the visual classifier for each category. During training, the visual classifiers for a few categories are given to learn the GCN parameters. At test time, these filters are used to predict the visual classifiers of unseen categories. We show that our approach is robust to noise in the KG. More importantly, our approach provides significant improvement in performance compared to the current state-of-the-art results (from 2 ~ 3% on some metrics to whopping 20% on a few).",
"title": ""
}
] |
scidocsrr
|
660337f1ab9ed1ab07ef473701a70bb4
|
Clothoid-based model predictive control for autonomous driving
|
[
{
"docid": "ccc4b8f75e39488068293540aeb508e2",
"text": "We present a novel approach to sketching 2D curves with minimally varying curvature as piecewise clothoids. A stable and efficient algorithm fits a sketched piecewise linear curve using a number of clothoid segments with G2 continuity based on a specified error tolerance. Further, adjacent clothoid segments can be locally blended to result in a G3 curve with curvature that predominantly varies linearly with arc length. We also handle intended sharp corners or G1 discontinuities, as independent rotations of clothoid pieces. Our formulation is ideally suited to conceptual design applications where aesthetic fairness of the sketched curve takes precedence over the precise interpolation of geometric constraints. We show the effectiveness of our results within a system for sketch-based road and robot-vehicle path design, where clothoids are already widely used.",
"title": ""
}
] |
[
{
"docid": "a7c79045bcbd9fac03015295324745e3",
"text": "Image saliency detection has recently witnessed rapid progress due to deep convolutional neural networks. However, none of the existing methods is able to identify object instances in the detected salient regions. In this paper, we present a salient instance segmentation method that produces a saliency mask with distinct object instance labels for an input image. Our method consists of three steps, estimating saliency map, detecting salient object contours and identifying salient object instances. For the first two steps, we propose a multiscale saliency refinement network, which generates high-quality salient region masks and salient object contours. Once integrated with multiscale combinatorial grouping and a MAP-based subset optimization framework, our method can generate very promising salient object instance segmentation results. To promote further research and evaluation of salient instance segmentation, we also construct a new database of 1000 images and their pixelwise salient instance annotations. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks for salient region detection as well as on our new dataset for salient instance segmentation.",
"title": ""
},
{
"docid": "e0079af0b45bf8d6fc194e59217e2a53",
"text": "Acral peeling skin syndrome (APSS) is an autosomal recessive skin disorder characterized by acral blistering and peeling of the outermost layers of the epidermis. It is caused by mutations in the gene for transglutaminase 5, TGM5. Here, we report on clinical and molecular findings in 11 patients and extend the TGM5 mutation database by four, to our knowledge, previously unreported mutations: p.M1T, p.L41P, p.L214CfsX15, and p.S604IfsX9. The recurrent mutation p.G113C was found in 9 patients, but also in 3 of 100 control individuals in a heterozygous state, indicating that APSS might be more widespread than hitherto expected. Using quantitative real-time PCR, immunoblotting, and immunofluorescence analysis, we demonstrate that expression and distribution of several epidermal differentiation markers and corneodesmosin (CDSN) is altered in APSS keratinocytes and skin. Although the expression of transglutaminases 1 and 3 was not changed, we found an upregulation of keratin 1, keratin 10, involucrin, loricrin, and CDSN, probably as compensatory mechanisms for stabilization of the epidermal barrier. Our results give insights into the consequences of TGM5 mutations on terminal epidermal differentiation.",
"title": ""
},
{
"docid": "54c66f2021f055d3fb09f733ab1c2c39",
"text": "In December 2013, sixteen teams from around the world gathered at Homestead Speedway near Miami, FL to participate in the DARPA Robotics Challenge (DRC) Trials, an aggressive robotics competition, partly inspired by the aftermath of the Fukushima Daiichi reactor incident. While the focus of the DRC Trials is to advance robotics for use in austere and inhospitable environments, the objectives of the DRC are to progress the areas of supervised autonomy and mobile manipulation for everyday robotics. NASA’s Johnson Space Center led a team comprised of numerous partners to develop Valkyrie, NASA’s first bipedal humanoid robot. Valkyrie is a 44 degree-of-freedom, series elastic actuator-based robot that draws upon over 18 years of humanoid robotics design heritage. Valkyrie’s application intent is aimed at not only responding to events like Fukushima, but also advancing human spaceflight endeavors in extraterrestrial planetary settings. This paper presents a brief system overview, detailing Valkyrie’s mechatronic subsystems, followed by a summarization of the inverse kinematics-based walking algorithm employed at the Trials. Next, the software and control architectures are highlighted along with a description of the operator interface tools. Finally, some closing remarks are given about the competition and a vision of future work is provided.",
"title": ""
},
{
"docid": "e3218926a5a32d2c44d5aea3171085e2",
"text": "The present study sought to determine the effects of Mindful Sport Performance Enhancement (MSPE) on runners. Participants were 25 recreational long-distance runners openly assigned to either the 4-week intervention or to a waiting-list control group, which later received the same program. Results indicate that the MSPE group showed significantly more improvement in organizational demands (an aspect of perfectionism) compared with controls. Analyses of preto postworkshop change found a significant increase in state mindfulness and trait awareness and decreases in sport-related worries, personal standards perfectionism, and parental criticism. No improvements in actual running performance were found. Regression analyses revealed that higher ratings of expectations and credibility of the workshop were associated with lower postworkshop perfectionism, more years running predicted higher ratings of perfectionism, and more life stressors predicted lower levels of worry. Findings suggest that MSPE may be a useful mental training intervention for improving mindfulness, sport-anxiety related worry, and aspects of perfectionism in long-distance runners.",
"title": ""
},
{
"docid": "e708fc43b5ac8abf8cc2707195e8a45e",
"text": "We develop analytical models for predicting the magnetic field distribution in Halbach magnetized machines. They are formulated in polar coordinates and account for the relative recoil permeability of the magnets. They are applicable to both internal and external rotor permanent-magnet machines with either an iron-cored or air-cored stator and/or rotor. We compare predicted results with those obtained by finite-element analyses and measurements. We show that the air-gap flux density varies significantly with the pole number and that an optimal combination of the magnet thickness and the pole number exists for maximum air-gap flux density, while the back iron can enhance the air-gap field and electromagnetic torque when the radial thickness of the magnet is small.",
"title": ""
},
{
"docid": "64687b4df5001e0bca42dc92e8e4915a",
"text": "The articles published in Landscape and Urban Planning during the past 16 years provide valuable insights into how humans interact with outdoor urban environments. This review paper explores the wide spectrum of human dimensions and issues, or human needs, addressed by 90 of these studies. As a basis for analysis, the major themes tapped by the findings were classified into two overarching groups containing three categories each. The Nature needs, directly linked with the physical features of the environmental setting, were categorized in terms of contact with nature, aesthetic preference, and recreation and play. The role of the environment is less immediate in the Human-interaction group, which includes the issues of social interaction, citizen participation in the design process, and community identity. Most significantly, the publications offer strong support for the important role nearby natural environments play in human well-being. Urban settings that provide nature contact are valuable not only in their own right, but also for meeting other needs in a manner unique to these more natural settings. In addition, although addressed in different ways, remarkable similarities exist concerning these six people requirements across diverse cultures and political systems. Urban residents worldwide express a desire for contact with nature and each other, attractive environments, places in which to recreate and play, privacy, a more active role in the design of their community, and a sense of community identity. The studies reviewed here offer continued evidence that the design of urban landscapes strongly influences the well-being and behavior of users and nearby inhabitants. © 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "db20d821e1a517c5996897b8653bf192",
"text": "Building on recent prior work that combines Google Street View (GSV) and crowdsourcing to remotely collect information on physical world accessibility, we present the first 'smart' system, Tohme, that combines machine learning, computer vision (CV), and custom crowd interfaces to find curb ramps remotely in GSV scenes. Tohme consists of two workflows, a human labeling pipeline and a CV pipeline with human verification, which are scheduled dynamically based on predicted performance. Using 1,086 GSV scenes (street intersections) from four North American cities and data from 403 crowd workers, we show that Tohme performs similarly in detecting curb ramps compared to a manual labeling approach alone (F- measure: 84% vs. 86% baseline) but at a 13% reduction in time cost. Our work contributes the first CV-based curb ramp detection system, a custom machine-learning based workflow controller, a validation of GSV as a viable curb ramp data source, and a detailed examination of why curb ramp detection is a hard problem along with steps forward.",
"title": ""
},
{
"docid": "5af801ca029fa3a0517ef9d32e7baab0",
"text": "Gender is one of the most common attributes used to describe an individual. It is used in multiple domains such as human computer interaction, marketing, security, and demographic reports. Research has been performed to automate the task of gender recognition in constrained environment using face images, however, limited attention has been given to gender classification in unconstrained scenarios. This work attempts to address the challenging problem of gender classification in multi-spectral low resolution face images. We propose a robust Class Representative Autoencoder model, termed as AutoGen for the same. The proposed model aims to minimize the intra-class variations while maximizing the inter-class variations for the learned feature representations. Results on visible as well as near infrared spectrum data for different resolutions and multiple databases depict the efficacy of the proposed model. Comparative results with existing approaches and two commercial off-the-shelf systems further motivate the use of class representative features for classification.",
"title": ""
},
{
"docid": "f6feb6789c0c9d2d5c354e73d2aaf9ad",
"text": "In this paper we present SimpleElastix, an extension of SimpleITK designed to bring the Elastix medical image registration library to a wider audience. Elastix is a modular collection of robust C++ image registration algorithms that is widely used in the literature. However, its command-line interface introduces overhead during prototyping, experimental setup, and tuning of registration algorithms. By integrating Elastix with SimpleITK, Elastix can be used as a native library in Python, Java, R, Octave, Ruby, Lua, Tcl and C# on Linux, Mac and Windows. This allows Elastix to intregrate naturally with many development environments so the user can focus more on the registration problem and less on the underlying C++ implementation. As means of demonstration, we show how to register MR images of brains and natural pictures of faces using minimal amount of code. SimpleElastix is open source, licensed under the permissive Apache License Version 2.0 and available at https://github.com/kaspermarstal/SimpleElastix.",
"title": ""
},
{
"docid": "e0f18f58aca88cd6486e2ca3365cfe76",
"text": "Given a query graph $$q$$ q and a data graph $$G$$ G , subgraph similarity matching is to retrieve all matches of $$q$$ q in $$G$$ G with the number of missing edges bounded by a given threshold $$\\epsilon $$ ϵ . Many works have been conducted to study the problem of subgraph similarity matching due to its ability to handle applications involved with noisy or erroneous graph data. In practice, a data graph can be extremely large, e.g., a web-scale graph containing hundreds of millions of vertices and billions of edges. The state-of-the-art approaches employ centralized algorithms to process the subgraph similarity queries, and thus, they are infeasible for such a large graph due to the limited computational power and storage space of a centralized server. To address this problem, in this paper, we investigate subgraph similarity matching for a web-scale graph deployed in a distributed environment. We propose distributed algorithms and optimization techniques that exploit the properties of subgraph similarity matching, so that we can well utilize the parallel computing power and lower the communication cost among the distributed data centers for query processing. Specifically, we first relax and decompose $$q$$ q into a minimum number of sub-queries. Next, we send each sub-query to conduct the exact matching in parallel. Finally, we schedule and join the exact matches to obtain final query answers. Moreover, our workload-balance strategy further speeds up the query processing. Our experimental results demonstrate the feasibility of our proposed approach in performing subgraph similarity matching over web-scale graph data.",
"title": ""
},
{
"docid": "50fdc7454c5590cfc4bf151a3637a99c",
"text": "Named Entity Recognition (NER) is the task of locating and classifying names in text. In previous work, NER was limited to a small number of predefined entity classes (e.g., people, locations, and organizations). However, NER on the Web is a far more challenging problem. Complex names (e.g., film or book titles) can be very difficult to pick out precisely from text. Further, the Web contains a wide variety of entity classes, which are not known in advance. Thus, hand-tagging examples of each entity class is impractical. This paper investigates a novel approach to the first step in Web NER: locating complex named entities in Web text. Our key observation is that named entities can be viewed as a species of multiword units, which can be detected by accumulating n-gram statistics over the Web corpus. We show that this statistical method’s F1 score is 50% higher than that of supervised techniques including Conditional Random Fields (CRFs) and Conditional Markov Models (CMMs) when applied to complex names. The method also outperforms CMMs and CRFs by 117% on entity classes absent from the training data. Finally, our method outperforms a semi-supervised CRF by 73%.",
"title": ""
},
{
"docid": "f5d92a445b2d4ecfc55393794258582c",
"text": "This paper presents a multi-modulus frequency divider (MMD) based on the Extended True Single-Phase Clock (E-TSPC) Logic. The MMD consists of four cascaded divide-by-2/3 E-TSPC cells. The basic functionality of the MMD and the E-TSPC 2/3 divider are explained. The whole design was implemented in an [0.13] m CMOS process from IBM. Simulation and measurement results of the MMD are shown. Measurement results indicates a maximum operating frequency of [10] GHz and a power consumption of [4] mW for each stage. These results are compared to other state of the art dual modulus E-TSPC dividers, showing the good position of this design relating to operating frequency and power consumption.",
"title": ""
},
{
"docid": "359b6308a6e6e3d6857cb6b4f59fd1bc",
"text": "Significant research has been devoted to detecting people in images and videos. In this paper we describe a human detection method that augments widely used edge-based features with texture and color information, providing us with a much richer descriptor set. This augmentation results in an extremely high-dimensional feature space (more than 170,000 dimensions). In such high-dimensional spaces, classical machine learning algorithms such as SVMs are nearly intractable with respect to training. Furthermore, the number of training samples is much smaller than the dimensionality of the feature space, by at least an order of magnitude. Finally, the extraction of features from a densely sampled grid structure leads to a high degree of multicollinearity. To circumvent these data characteristics, we employ Partial Least Squares (PLS) analysis, an efficient dimensionality reduction technique, one which preserves significant discriminative information, to project the data onto a much lower dimensional subspace (20 dimensions, reduced from the original 170,000). Our human detection system, employing PLS analysis over the enriched descriptor set, is shown to outperform state-of-the-art techniques on three varied datasets including the popular INRIA pedestrian dataset, the low-resolution gray-scale DaimlerChrysler pedestrian dataset, and the ETHZ pedestrian dataset consisting of full-length videos of crowded scenes.",
"title": ""
},
{
"docid": "e06433abc3fe0e25e65339e50746d50f",
"text": "Context: Current software systems have increasingly implemented context-aware adaptations to handle the diversity of conditions of their surrounding environment. Therefore, people are becoming used to a variety of context-aware software systems (CASS). This context-awareness brings challenges to the software construction and testing because the context is unpredictable and may change at any time. Therefore, software engineers need to consider the dynamic context changes while testing CASS. Different test case design techniques (TCDT) have been proposed to support the testing of CASS. However, to the best of our knowledge, there is no analysis of these proposals on the advantages, limitations and their effective support to context variation during testing. Objective: To gather empirical evidence on TCDT concerned with CASS by identifying, evaluating and synthesizing knowledge available in the literature. Method: To undertake a secondary study (quasi -Systematic Literature Review) on TCDT for CASS regarding their assessed quality characteristics, used coverage criteria, test type, and test technique. Results: From 833 primary studies published between 2004 and 2014, just 17 studies regard the design of test cases for CASS. Most of them focus on functional suitability. Furthermore, some of them take into account the changes in the context by providing specific test cases for each context configuration (static perspective) during the test execution. These 17 studies revealed five challenges affecting the design of test cases and 20 challenges regarding the testing of CASS. Besides, seven TCDT are not empirically evaluated. Conclusion: A few TCDT partially support the testing of CASS. However, it has not been observed evidence on any TCDT supporting the truly context-aware testing, which that can adapt the expected output based on the context variation (dynamic perspective) during the test execution. It is an open issue deserving greater attention from researchers to increase the testing coverage and ensure users confidence in CASS.",
"title": ""
},
{
"docid": "a0e4080652269445c6e36b76d5c8cd09",
"text": "Enabling a computer to understand a document so that it can answer comprehension questions is a central, yet unsolved goal of NLP. A key factor impeding its solution by machine learned systems is the limited availability of human-annotated data. Hermann et al. (2015) seek to solve this problem by creating over a million training examples by pairing CNN and Daily Mail news articles with their summarized bullet points, and show that a neural network can then be trained to give good performance on this task. In this paper, we conduct a thorough examination of this new reading comprehension task. Our primary aim is to understand what depth of language understanding is required to do well on this task. We approach this from one side by doing a careful hand-analysis of a small subset of the problems and from the other by showing that simple, carefully designed systems can obtain accuracies of 72.4% and 75.8% on these two datasets, exceeding current state-of-the-art results by over 5% and approaching what we believe is the ceiling for performance on this task.1",
"title": ""
},
{
"docid": "334510797355ca654d01dc45b65693ef",
"text": "Liquid crystal displays (LCDs) hold a large share of the flat-panel display market because LCDs offer advantages such as low power consumption, low radiation, and good image quality. However, image defects, such as spotlight, uniformity, and Mura defects, can impair the quality of an LCD. This research examined human perceptions of region-Mura and used Response Time and subjective markdown price to indicate the various severity levels of region-Mura that appeared at different display locations. The results indicate that, within a specific Mura Level range, the Mura’s location has a considerable impact on perceived quality (p < 0.001). Mura on the centers of LCDs have more impact than Mura on the corners of LCDs. Not all peripheral Mura were considered to be equal; participants chose different price markdown prices for LCDs with Mura in lower corners than they chose for LCDs with Mura in upper corners. These findings suggest that a manufacturer should establish a scraping threshold for LCDs based on information regarding Mura location to avoid the production waste from scrapping those LCDs, and should rotate the panel to position the most severe Mura in the lower part of the display to obtain a better perceived quality. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "85719d4bc86c7c8bbe5799a716d6533b",
"text": "We propose Sparse Neural Network architectures that are based on random or structured bipartite graph topologies. Sparse architectures provide compression of the models learned and speed-ups of computations, they can also surpass their unstructured or fully connected counterparts. As we show, even more compact topologies of the so-called SNN (Sparse Neural Network) can be achieved with the use of structured graphs of connections between consecutive layers of neurons. In this paper, we investigate how the accuracy and training speed of the models depend on the topology and sparsity of the neural network. Previous approaches using sparcity are all based on fully connected neural network models and create sparcity during training phase, instead we explicitly define a sparse architectures of connections before the training. Building compact neural network models is coherent with empirical observations showing that there is much redundancy in learned neural network models. We show experimentally that the accuracy of the models learned with neural networks depends on ”expander-like” properties of the underlying topologies such as the spectral gap and algebraic connectivity rather than the density of the graphs of connections. 1 ar X iv :1 70 6. 05 68 3v 1 [ cs .L G ] 1 8 Ju n 20 17",
"title": ""
},
{
"docid": "84337f21721a6aaae65061f677d11678",
"text": "This paper deals with the implementation of a stochastic flash ADC with the presence of comparator metastability, in a field-programmable gate array. Stochastic flash ADC exploits comparator threshold variation and can be implemented with simple and highly digital structures. We show that such designs is also prone to comparator metastability, therefore we propose an averaging scheme as a simple means to handle the situation. Experimental results from a prototype system based on an FPGA is given which shows the effectiveness of the averaging technique, resulting in a maximum measured SNDR of 22.24 dB with a sampling rate of 98 kHz.",
"title": ""
},
{
"docid": "e21f4c327c0006196fde4cf53ed710a7",
"text": "To focus the efforts of security experts, the goals of this empirical study are to analyze which security vulnerabilities can be discovered by code review, identify characteristics of vulnerable code changes, and identify characteristics of developers likely to introduce vulnerabilities. Using a three-stage manual and automated process, we analyzed 267,046 code review requests from 10 open source projects and identified 413 Vulnerable Code Changes (VCC). Some key results include: (1) code review can identify common types of vulnerabilities; (2) while more experienced contributors authored the majority of the VCCs, the less experienced contributors' changes were 1.8 to 24 times more likely to be vulnerable; (3) the likelihood of a vulnerability increases with the number of lines changed, and (4) modified files are more likely to contain vulnerabilities than new files. Knowing which code changes are more prone to contain vulnerabilities may allow a security expert to concentrate on a smaller subset of submitted code changes. Moreover, we recommend that projects should: (a) create or adapt secure coding guidelines, (b) create a dedicated security review team, (c) ensure detailed comments during review to help knowledge dissemination, and (d) encourage developers to make small, incremental changes rather than large changes.",
"title": ""
},
{
"docid": "12363d704fcfe9fef767c5e27140c214",
"text": "The application range of UAVs (unmanned aerial vehicles) is expanding along with performance upgrades. Vertical take-off and landing (VTOL) aircraft has the merits of both fixed-wing and rotary-wing aircraft. Tail-sitting is the simplest way for the VTOL maneuver since it does not need extra actuators. However, conventional hovering control for a tail-sitter UAV is not robust enough against large disturbance such as a blast of wind, a bird strike, and so on. It is experimentally observed that the conventional quaternion feedback hovering control often fails to keep stability when the control compensates large attitude errors. This paper proposes a novel hovering control strategy for a tail-sitter VTOL UAV that increases stability against large disturbance. In order to verify the proposed hovering control strategy, simulations and experiments on hovering of the UAV are performed giving large attitude errors. The results show that the proposed control strategy successfully compensates initial large attitude errors keeping stability, while the conventional quaternion feedback controller fails.",
"title": ""
}
] |
scidocsrr
|
57f64cec1e90f515cf7dd268fb57366f
|
Integrating Stereo Vision with a CNN Tracker for a Person-Following Robot
|
[
{
"docid": "e14d1f7f7e4f7eaf0795711fb6260264",
"text": "In this paper, we treat tracking as a learning problem of estimating the location and the scale of an object given its previous location, scale, as well as current and previous image frames. Given a set of examples, we train convolutional neural networks (CNNs) to perform the above estimation task. Different from other learning methods, the CNNs learn both spatial and temporal features jointly from image pairs of two adjacent frames. We introduce multiple path ways in CNN to better fuse local and global information. A creative shift-variant CNN architecture is designed so as to alleviate the drift problem when the distracting objects are similar to the target in cluttered environment. Furthermore, we employ CNNs to estimate the scale through the accurate localization of some key points. These techniques are object-independent so that the proposed method can be applied to track other types of object. The capability of the tracker of handling complex situations is demonstrated in many testing sequences.",
"title": ""
},
{
"docid": "f25dfc98473b09744d237d85d9aec0b5",
"text": "Object tracking has been one of the most important and active research areas in the field of computer vision. A large number of tracking algorithms have been proposed in recent years with demonstrated success. However, the set of sequences used for evaluation is often not sufficient or is sometimes biased for certain types of algorithms. Many datasets do not have common ground-truth object positions or extents, and this makes comparisons among the reported quantitative results difficult. In addition, the initial conditions or parameters of the evaluated tracking algorithms are not the same, and thus, the quantitative results reported in literature are incomparable or sometimes contradictory. To address these issues, we carry out an extensive evaluation of the state-of-the-art online object-tracking algorithms with various evaluation criteria to understand how these methods perform within the same framework. In this work, we first construct a large dataset with ground-truth object positions and extents for tracking and introduce the sequence attributes for the performance analysis. Second, we integrate most of the publicly available trackers into one code library with uniform input and output formats to facilitate large-scale performance evaluation. Third, we extensively evaluate the performance of 31 algorithms on 100 sequences with different initialization settings. By analyzing the quantitative results, we identify effective approaches for robust tracking and provide potential future research directions in this field.",
"title": ""
}
] |
[
{
"docid": "dd06c1c39e9b4a1ae9ee75c3251f27dc",
"text": "Magnetoencephalographic measurements (MEG) were used to examine the effect on the human auditory cortex of removing specific frequencies from the acoustic environment. Subjects listened for 3 h on three consecutive days to music \"notched\" by removal of a narrow frequency band centered on 1 kHz. Immediately after listening to the notched music, the neural representation for a 1-kHz test stimulus centered on the notch was found to be significantly diminished compared to the neural representation for a 0.5-kHz control stimulus centered one octave below the region of notching. The diminished neural representation for 1 kHz reversed to baseline between the successive listening sessions. These results suggest that rapid changes can occur in the tuning of neurons in the adult human auditory cortex following manipulation of the acoustic environment. A dynamic form of neural plasticity may underlie the phenomenon observed here.",
"title": ""
},
{
"docid": "c197e1ab49287fc571f2a99a9501bf84",
"text": "X-rays are commonly performed imaging tests that use small amounts of radiation to produce pictures of the organs, tissues, and bones of the body. X-rays of the chest are used to detect abnormalities or diseases of the airways, blood vessels, bones, heart, and lungs. In this work we present a stochastic attention-based model that is capable of learning what regions within a chest X-ray scan should be visually explored in order to conclude that the scan contains a specific radiological abnormality. The proposed model is a recurrent neural network (RNN) that learns to sequentially sample the entire X-ray and focus only on informative areas that are likely to contain the relevant information. We report on experiments carried out with more than 100, 000 X-rays containing enlarged hearts or medical devices. The model has been trained using reinforcement learning methods to learn task-specific policies.",
"title": ""
},
{
"docid": "ad1000d0975bb0c605047349267c5e47",
"text": "A systematic review of randomized clinical trials was conducted to evaluate the acceptability and usefulness of computerized patient education interventions. The Columbia Registry, MEDLINE, Health, BIOSIS, and CINAHL bibliographic databases were searched. Selection was based on the following criteria: (1) randomized controlled clinical trials, (2) educational patient-computer interaction, and (3) effect measured on the process or outcome of care. Twenty-two studies met the selection criteria. Of these, 13 (59%) used instructional programs for educational intervention. Five studies (22.7%) tested information support networks, and four (18%) evaluated systems for health assessment and history-taking. The most frequently targeted clinical application area was diabetes mellitus (n = 7). All studies, except one on the treatment of alcoholism, reported positive results for interactive educational intervention. All diabetes education studies, in particular, reported decreased blood glucose levels among patients exposed to this intervention. Computerized educational interventions can lead to improved health status in several major areas of care, and appear not to be a substitute for, but a valuable supplement to, face-to-face time with physicians.",
"title": ""
},
{
"docid": "ccedb6cff054254f3427ab0d45017d2a",
"text": "Traffic and power generation are the main sources of urban air pollution. The idea that outdoor air pollution can cause exacerbations of pre-existing asthma is supported by an evidence base that has been accumulating for several decades, with several studies suggesting a contribution to new-onset asthma as well. In this Series paper, we discuss the effects of particulate matter (PM), gaseous pollutants (ozone, nitrogen dioxide, and sulphur dioxide), and mixed traffic-related air pollution. We focus on clinical studies, both epidemiological and experimental, published in the previous 5 years. From a mechanistic perspective, air pollutants probably cause oxidative injury to the airways, leading to inflammation, remodelling, and increased risk of sensitisation. Although several pollutants have been linked to new-onset asthma, the strength of the evidence is variable. We also discuss clinical implications, policy issues, and research gaps relevant to air pollution and asthma.",
"title": ""
},
{
"docid": "02df2dde321bb81220abdcff59418c66",
"text": "Monitoring aquatic debris is of great interest to the ecosystems, marine life, human health, and water transport. This paper presents the design and implementation of SOAR - a vision-based surveillance robot system that integrates an off-the-shelf Android smartphone and a gliding robotic fish for debris monitoring. SOAR features real-time debris detection and coverage-based rotation scheduling algorithms. The image processing algorithms for debris detection are specifically designed to address the unique challenges in aquatic environments. The rotation scheduling algorithm provides effective coverage of sporadic debris arrivals despite camera's limited angular view. Moreover, SOAR is able to dynamically offload computation-intensive processing tasks to the cloud for battery power conservation. We have implemented a SOAR prototype and conducted extensive experimental evaluation. The results show that SOAR can accurately detect debris in the presence of various environment and system dynamics, and the rotation scheduling algorithm enables SOAR to capture debris arrivals with reduced energy consumption.",
"title": ""
},
{
"docid": "910c42c4737d38db592f7249c2e0d6d2",
"text": "This document presents the Enterprise Ontology a collection of terms and de nitions relevant to business enterprises It was developed as part of the Enterprise Project a collaborative e ort to provide a framework for enterprise modelling The Enterprise Ontology will serve as a basis for this framework which includes methods and a computer toolset for enterprise modelling We give an overview of the Enterprise Project elaborate on the intended use of the Ontology and discuss the process we went through to build it The scope of the Enterprise Ontology is limited to those core concepts required for the project however it is expected that it will appeal to a wider audience It should not be considered static during the course of the project the Enterprise Ontology will be further re ned and extended",
"title": ""
},
{
"docid": "e6dcae244f91dc2d7e843d9860ac1cfd",
"text": "After Disney's Michael Eisner, Miramax's Harvey Weinstein, and Hewlett-Packard's Carly Fiorina fell from their heights of power, the business media quickly proclaimed thatthe reign of abrasive, intimidating leaders was over. However, it's premature to proclaim their extinction. Many great intimidators have done fine for a long time and continue to thrive. Their modus operandi runs counter to a lot of preconceptions about what it takes to be a good leader. They're rough, loud, and in your face. Their tactics include invading others' personal space, staging tantrums, keeping people guessing, and possessing an indisputable command of facts. But make no mistake--great intimidators are not your typical bullies. They're driven by vision, not by sheer ego or malice. Beneath their tough exteriors and sharp edges are some genuine, deep insights into human motivation and organizational behavior. Indeed, these leaders possess political intelligence, which can make the difference between paralysis and successful--if sometimes wrenching--organizational change. Like socially intelligent leaders, politically intelligent leaders are adept at sizing up others, but they notice different things. Those with social intelligence assess people's strengths and figure out how to leverage them; those with political intelligence exploit people's weaknesses and insecurities. Despite all the obvious drawbacks of working under them, great intimidators often attract the best and brightest. And their appeal goes beyond their ability to inspire high performance. Many accomplished professionals who gravitate toward these leaders want to cultivate a little \"inner intimidator\" of their own. In the author's research, quite a few individuals reported having positive relationships with intimidating leaders. In fact, some described these relationships as profoundly educational and even transformational. So before we throw out all the great intimidators, the author argues, we should stop to consider what we would lose.",
"title": ""
},
{
"docid": "eb6ee2fd1f7f1d0d767e4dde2d811bed",
"text": "This paper presents a mathematical framework for performance analysis of Behavior Trees (BTs). BTs are a recent alternative to Finite State Machines (FSMs), for doing modular task switching in robot control architectures. By encoding the switching logic in a tree structure, instead of distributing it in the states of a FSM, modularity and reusability are improved. In this paper, we compute performance measures, such as success/failure probabilities and execution times, for plans encoded and executed by BTs. To do this, we first introduce Stochastic Behavior Trees (SBT), where we assume that the probabilistic performance measures of the basic action controllers are given. We then show how Discrete Time Markov Chains (DTMC) can be used to aggregate these measures from one level of the tree to the next. The recursive structure of the tree then enables us to step by step propagate such estimates from the leaves (basic action controllers) to the root (complete task execution). Finally, we verify our analytical results using massive Monte Carlo simulations, and provide an illustrative example of the results for a complex robotic task.",
"title": ""
},
{
"docid": "e0f797ff66a81b88bbc452e86864d7bc",
"text": "A key challenge in radar micro-Doppler classification is the difficulty in obtaining a large amount of training data due to costs in time and human resources. Small training datasets limit the depth of deep neural networks (DNNs), and, hence, attainable classification accuracy. In this work, a novel method for diversifying Kinect-based motion capture (MOCAP) simulations of human micro-Doppler to span a wider range of potential observations, e.g. speed, body size, and style, is proposed. By applying three transformations, a small set of MOCAP measurements is expanded to generate a large training dataset for network initialization of a 30-layer deep residual neural network. Results show that the proposed training methodology and residual DNN yield improved bottleneck feature performance and the highest overall classification accuracy among other DNN architectures, including transfer learning from the 1.5 million sample ImageNet database.",
"title": ""
},
{
"docid": "c41efa28806b3ac3d2b23d9e52b85193",
"text": "The Internet of Things (IoT) shall be able to incorporate transparently and seamlessly a large number of different and heterogeneous end systems, while providing open access to selected subsets of data for the development of a plethora of digital services. Building a general architecture for the IoT is hence a very complex task, mainly because of the extremely large variety of devices, link layer technologies, and services that may be involved in such a system. In this paper, we focus specifically to an urban IoT system that, while still being quite a broad category, are characterized by their specific application domain. Urban IoTs, in fact, are designed to support the Smart City vision, which aims at exploiting the most advanced communication technologies to support added-value services for the administration of the city and for the citizens. This paper hence provides a comprehensive survey of the enabling technologies, protocols, and architecture for an urban IoT. Furthermore, the paper will present and discuss the technical solutions and best-practice guidelines adopted in the Padova Smart City project, a proof-of-concept deployment of an IoT island in the city of Padova, Italy, performed in collaboration with the city municipality.",
"title": ""
},
{
"docid": "3b97d25d0a0e07d4b4fccc64ff251cce",
"text": "Consider a centralized hierarchical cloud-based multimedia system (CMS) consisting of a resource manager, cluster heads, and server clusters, in which the resource manager assigns clients' requests for multimedia service tasks to server clusters according to the task characteristics, and then each cluster head distributes the assigned task to the servers within its server cluster. For such a complicated CMS, however, it is a research challenge to design an effective load balancing algorithm that spreads the multimedia service task load on servers with the minimal cost for transmitting multimedia data between server clusters and clients, while the maximal load limit of each server cluster is not violated. Unlike previous work, this paper takes into account a more practical dynamic multiservice scenario in which each server cluster only handles a specific type of multimedia task, and each client requests a different type of multimedia service at a different time. Such a scenario can be modelled as an integer linear programming problem, which is computationally intractable in general. As a consequence, this paper further solves the problem by an efficient genetic algorithm with an immigrant scheme, which has been shown to be suitable for dynamic problems. Simulation results demonstrate that the proposed genetic algorithm can efficiently cope with dynamic multiservice load balancing in CMS.",
"title": ""
},
{
"docid": "f4b5a2584833466fa26da00b07a7f261",
"text": "This paper describes the development of the technology threat avoidance theory (TTAT), which explains individual IT users’ behavior of avoiding the threat of malicious information technologies. We articulate that avoidance and adoption are two qualitatively different phenomena and contend that technology acceptance theories provide a valuable, but incomplete, understanding of users’ IT threat avoidance behavior. Drawing from cybernetic theory and coping theory, TTAT delineates the avoidance behavior as a dynamic positive feedback loop in which users go through two cognitive processes, threat appraisal and coping appraisal, to decide how to cope with IT threats. In the threat appraisal, users will perceive an IT threat if they believe that they are susceptible Alan Dennis was the accepting senior editor for this paper. to malicious IT and that the negative consequences are severe. The threat perception leads to coping appraisal, in which users assess the degree to which the IT threat can be avoided by taking safeguarding measures based on perceived effectiveness and costs of the safeguarding measure and selfefficacy of taking the safeguarding measure. TTAT posits that users are motivated to avoid malicious IT when they perceive a threat and believe that the threat is avoidable by taking safeguarding measures; if users believe that the threat cannot be fully avoided by taking safeguarding measures, they would engage in emotion-focused coping. Integrating process theory and variance theory, TTAT enhances our understanding of human behavior under IT threats and makes an important contribution to IT security research and practice.",
"title": ""
},
{
"docid": "d485607db19e3defa000b24a59b1074a",
"text": "In the past years we have witnessed an explosive growth of the data and information on the World Wide Web, which makes it difficult for normal users to find the information that they are interested in. On the other hand, the majority of the data and resources are very unpopular, which can be considered as “hidden information”, and are very difficult to find. By building a bridge between the users and the objects and constructing their similarities, the Personal Recommender System (PRS) can recommend the objects that the users are potentially interested in. PRS plays an important role in not only social and economic life but also scientific analysis. The interdisciplinary PRS attracts attention from the communities of information science, computational mathematics, statistical physics, management science, and consumer behaviors, etc. In fact, PRS is one of the most efficient tools to solve the information overload problem. According to the recommendation algorithms, we introduce four typical systems, including the collaborating filtering system, the content-based system, the structure-based system, and the hybrid system. In addition, some improved algorithms are proposed to overcome the limitations of traditional systems. This review article may shed some light on the study of PRS from different backgrounds.",
"title": ""
},
{
"docid": "fad8cf15678cccbc727e9fba6292474d",
"text": "OBJECTIVE\nClinical records contain significant medical information that can be useful to researchers in various disciplines. However, these records also contain personal health information (PHI) whose presence limits the use of the records outside of hospitals. The goal of de-identification is to remove all PHI from clinical records. This is a challenging task because many records contain foreign and misspelled PHI; they also contain PHI that are ambiguous with non-PHI. These complications are compounded by the linguistic characteristics of clinical records. For example, medical discharge summaries, which are studied in this paper, are characterized by fragmented, incomplete utterances and domain-specific language; they cannot be fully processed by tools designed for lay language.\n\n\nMETHODS AND RESULTS\nIn this paper, we show that we can de-identify medical discharge summaries using a de-identifier, Stat De-id, based on support vector machines and local context (F-measure=97% on PHI). Our representation of local context aids de-identification even when PHI include out-of-vocabulary words and even when PHI are ambiguous with non-PHI within the same corpus. Comparison of Stat De-id with a rule-based approach shows that local context contributes more to de-identification than dictionaries combined with hand-tailored heuristics (F-measure=85%). Comparison with two well-known named entity recognition (NER) systems, SNoW (F-measure=94%) and IdentiFinder (F-measure=36%), on five representative corpora show that when the language of documents is fragmented, a system with a relatively thorough representation of local context can be a more effective de-identifier than systems that combine (relatively simpler) local context with global context. Comparison with a Conditional Random Field De-identifier (CRFD), which utilizes global context in addition to the local context of Stat De-id, confirms this finding (F-measure=88%) and establishes that strengthening the representation of local context may be more beneficial for de-identification than complementing local with global context.",
"title": ""
},
{
"docid": "16a6c26d6e185be8383c062c6aa620f8",
"text": "In this research, we suggested a vision-based traffic accident detection system for automatically detecting, recording, and reporting traffic accidents at intersections. This model first extracts the vehicles from the video image of CCD camera, tracks the moving vehicles, and extracts features such as the variation rate of the velocity, position, area, and direction of moving vehicles. The model then makes decisions on the traffic accident based on the extracted features. And we suggested and designed the metadata registry for the system to improve the interoperability. In the field test, 4 traffic accidents were detected and recorded by the system. The video clips are invaluable for intersection safety analysis.",
"title": ""
},
{
"docid": "da5f44562df4d13f2f8687344d4c4fd0",
"text": "Location finding by using wireless technology is one of the emerging and important technologies of wireless sensor networks. GPS can be utilized for outdoor areas only it cannot be used for tracking the user inside the building. The main motivation of this paper is to implement the system which can locate and track the user inside the building. Indoor locations include buildings like an airport, huge malls, supermarkets, universities and large infrastructures. The significant problem that this system solves is of tracking the user inside the building. The accurate indoor location can be found out by using the Received Signal Strength Indication (RSSI). The additional hardware is not required for RSSI, and moreover, it is easy to understand. The RSS (Received Signal Strength) values are calculated with the help of WiFi Access points and the mobile device. The system should provide the exact location of the user and also track the user. This paper presents a system that helps in finding out the exact location and tracking of the mobile device in the indoor environment. It can also be used to navigate the user to a required destination using the navigation function.",
"title": ""
},
{
"docid": "60e16b0c5bff9f7153c64a38193b8759",
"text": "The “Flash Crash” of May 6th, 2010 comprised an unprecedented 1,000 point, five-minute decline in the Dow Jones Industrial Average that was followed by a rapid, disorderly recovery of prices. We illuminate the causes of this singular event with the first analysis that tracks the full order book activity at millisecond granularity. We document previously overlooked market data anomalies and establish that these anomalies Granger-caused liquidity withdrawal. We offer a simulation model that formalizes the process by which large sell orders, combined with widespread liquidity withdrawal, can generate Flash Crash-like events in the absence of fundamental information arrival. ∗This work was supported by the Hellman Fellows Fund and the Rock Center for Corporate Governance at Stanford University. †Email: [email protected]. ‡Email: [email protected] §Email: [email protected]",
"title": ""
},
{
"docid": "fa292adbad54c22fce27afbc5467efad",
"text": "This paper presents the results of a case study on the impacts of implementing Enterprise Content Management Systems (ECMSs) in an organization. It investigates how these impacts are influenced by the functionalities of an ECMS and by the nature of the ECMS-supported processes. The results confirm that both factors do influence the impacts. Further, the results indicate that the implementation of an ECMS can change the nature of ECMS-supported processes. It is also demonstrated that the functionalities of an ECMS need to be aligned with the nature of the processes of the implementing organization. This finding confirms previous research from the Workflow Management domain and extends it to the ECM domain. Finally, the case study results show that implementing an ECMS to support rather ‘static’ processes can be expected to cause more and stronger impacts than the support of ‘flexible’ processes.",
"title": ""
},
{
"docid": "a1f4b4c6e98e6b5e8b7f939318a5e808",
"text": "A new hardware scheme for computing the transition and control matrix of a parallel cyclic redundancy checksum is proposed. This opens possibilities for parallel high-speed cyclic redundancy checksum circuits that reconfigure very rapidly to new polynomials. The area requirements are lower than those for a realization storing a precomputed matrix. An additional simplification arises as only the polynomial needs to be supplied. The derived equations allow the width of the data to be processed in parallel to be selected independently of the degree of the polynomial. The new design has been simulated and outperforms a recently proposed architecture significantly in speed, area, and energy efficiency.",
"title": ""
},
{
"docid": "3a834b5c9f5621c1801c7650b33f1e41",
"text": "Human-to-human infection, as a type of fatal public health threats, can rapidly spread, resulting in a large amount of labor and health cost for treatment, control and prevention. To slow down the spread of infection, social network is envisioned to provide detailed contact statistics to isolate susceptive people who has frequent contacts with infected patients. In this paper, we propose a novel human-to-human infection analysis approach by exploiting social network data and health data that are collected by social network and e-healthcare technologies. We enable the social cloud server and health cloud server to exchange social contact information of infected patients and user's health condition in a privacy-preserving way. Specifically, we propose a privacy-preserving data query method based on conditional oblivious transfer to guarantee that only the authorized entities can query users’ social data and the social cloud server cannot infer anything during the query. In addition, we propose a privacy-preserving classification-based infection analysis method that can be performed by untrusted cloud servers without accessing the users’ health data. The performance evaluation shows that the proposed approach achieves higher infection analysis accuracy with the acceptable computational overhead.",
"title": ""
}
] |
scidocsrr
|
9aec7682c9507086ab1022b9cec8ac9c
|
Pricing Digital Marketing: Information, Risk Sharing and Performance
|
[
{
"docid": "f7562e0540e65fdfdd5738d559b4aad1",
"text": "An important aspect of marketing practice is the targeting of consumer segments for differential promotional activity. The premise of this activity is that there exist distinct segments of homogeneous consumers who can be identified by readily available demographic information. The increased availability of individual consumer panel data open the possibility of direct targeting of individual households. The goal of this paper is to assess the information content of various information sets available for direct marketing purposes. Information on the consumer is obtained from the current and past purchase history as well as demographic characteristics. We consider the situation in which the marketer may have access to a reasonably long purchase history which includes both the products purchased and information on the causal environment. Short of this complete purchase history, we also consider more limited information sets which consist of only the current purchase occasion or only information on past product choice without causal variables. Proper evaluation of this information requires a flexible model of heterogeneity which can accommodate observable and unobservable heterogeneity as well as produce household level inferences for targeting purposes. We develop new econometric methods to imple0732-2399/96/1504/0321$01.25 Copyright C 1996, Institute for Operations Research and the Management Sciences ment a random coefficient choice model in which the heterogeneity distribution is related to observable demographics. We couple this approach to modeling heterogeneity with a target couponing problem in which coupons are customized to specific households on the basis of various information sets. The couponing problem allows us to place a monetary value on the information sets. Our results indicate there exists a tremendous potential for improving the profitability of direct marketing efforts by more fully utilizing household purchase histories. Even rather short purchase histories can produce a net gain in revenue from target couponing which is 2.5 times the gain from blanket couponing. The most popular current electronic couponing trigger strategy uses only one observation to customize the delivery of coupons. Surprisingly, even the information contained in observing one purchase occasion boasts net couponing revenue by 50% more than that which would be gained by the blanket strategy. This result, coupled with increased competitive pressures, will force targeted marketing strategies to become much more prevalent in the future than they are today. (Target Marketing; Coupons; Heterogeneity; Bayesian Hierarchical Models) MARKETING SCIENCE/Vol. 15, No. 4, 1996 pp. 321-340 THE VALUE OF PURCHASE HISTORY DATA IN TARGET MARKETING",
"title": ""
}
] |
[
{
"docid": "dc67945b32b2810a474acded3c144f68",
"text": "This paper presents an overview of the eld of Intelligent Products. As Intelligent Products have many facets, this paper is mainly focused on the concept behind Intelligent Products, the technical foundations, and the achievable practical goals of Intelligent Products. A novel classi cation of Intelligent Products is introduced, which distinguishes between three orthogonal dimensions. Furthermore, the technical foundations in the areas of automatic identi cation and embedded processing, distributed information storage and processing, and agent-based systems are discussed, as well as the achievable practical goals in the contexts of manufacturing, supply chains, asset management, and product life cycle management.",
"title": ""
},
{
"docid": "4d7c0222317fbd866113e1a244a342f3",
"text": "A simple method of \"tuning up\" a multiple-resonant-circuit filter quickly and exactly is demonstrated. The method may be summarized as follows: Very loosely couple a detector to the first resonator of the filter; then, proceeding in consecutive order, tune all odd-numbered resonators for maximum detector output, and all even-numbered resonators for minimum detector output (always making sure that the resonator immediately following the one to be resonated is completely detuned). Also considered is the correct adjustment of the two other types of constants in a filter. Filter constants can always be reduced to only three fundamental types: f0, dr(1/Qr), and Kr(r+1). This is true whether a lumped-element 100-kc filter or a distributed-element 5,000-mc unit is being considered. dr is adjusted by considering the rth resonator as a single-tuned circuit (all other resonators completely detuned) and setting the bandwidth between the 3-db-down-points to the required value. Kr(r+1) is adjusted by considering the rth and (r+1)th adjacent resonators as a double-tuned circuit (all other resonators completely detuned) and setting the bandwidth between the resulting response peaks to the required value. Finally, all the required values for K and Q are given for an n-resonant-circuit filter that will produce the response (Vp/V)2=1 +(Δf/Δf3db)2n.",
"title": ""
},
{
"docid": "7def66c81180a73282cd7e463dc4938c",
"text": "Drug abuse in Nigeria has been indicated to be on the rise in recent years. The use of hard drugs and misuse of prescription drugs for nonmedical purposes cuts across all strata, especially the youths. Tramadol (2[(Dimethylamin) methyl]-1-(3-methoxyphenyl)cyclohexanol) is known for its analgesic potentials. This potent opioid pain killer is misused by Nigerian youths, owing to its suspicion as sexual performance drug. This study therefore is aimed at determining the effect of tramadol on hormone levels its improved libido properties and possibly fertility. Twenty seven (27) European rabbits weighing 1.0 to 2.0 kg were used. Animals were divided into four major groups consisting of male and female control, and male and female tramadol treated groups. Treated groups were further divided into oral and intramuscular (IM) administered groups. Oral groups were administered 25 mg/kg b.w. of tramadol per day while the IM groups received 15 mg/kg b.w. per Original Research Article Osadolor and Omo-Erhabor; BJMMR, 14(8): 1-11, 2016; Article no.BJMMR.24620 2 day over a period of thirty days. Blood samples were collected at the end of the experiment for progesterone, testosterone, estrogen (E2), luteinizing hormone, follicle stimulating hormone (FSH), β-human chorionic gonadotropin and prolactin estimation. Tramadol treated groups were compared with control groups at the end of the study, as well as within group comparison was done. From the results, FSH was found to be significantly reduced (p<0.05) while LH increased significantly (p<0.05). A decrease was observed for testosterone (p<0.001), and estrogen, FSH, progesterone also decreased (p<0.05). Significant changes weren’t observed when IM groups were compared with oral groups. This study does not support an improvement of libido by tramadol, though its possible usefulness in the treatment of premature ejaculation may have been established, but its capabilities to induce male and female infertility is still in doubt.",
"title": ""
},
{
"docid": "95cd9d6572700e2b118c7cb0ffba549a",
"text": "Non-volatile main memory (NVRAM) has the potential to fundamentally change the persistency of software. Applications can make their state persistent by directly placing data structures on NVRAM instead of volatile DRAM. However, the persistent nature of NVRAM requires significant changes for memory allocators that are now faced with the additional tasks of data recovery and failure-atomicity. In this paper, we present nvm malloc, a general-purpose memory allocator concept for the NVRAM era as a basic building block for persistent applications. We introduce concepts for managing named allocations for simplified recovery and using volatile and non-volatile memory in combination to provide both high performance and failure-atomic allocations.",
"title": ""
},
{
"docid": "ed2c198cf34fe63d99a53dd5315bde53",
"text": "The article briefly elaborated the ship hull optimization research development of domestic and foreign based on CFD, proposed that realizing the key of ship hull optimization based on CFD is the hull form parametrization geometry modeling technology. On the foundation of the domestic and foreign hull form parametrization, we proposed the ship blending method, and clarified the principle, had developed the hull form parametrization blending module. Finally, we realized the integration of hull form parametrization blending module and CFD using the integrated optimization frame, has realized hull form automatic optimization design based on CFD, build the foundation for the research of ship multi-disciplinary optimization.",
"title": ""
},
{
"docid": "b25cfcd6ceefffe3039bb5a6a53e216c",
"text": "With the increasing applications in the domains of ubiquitous and context-aware computing, Internet of Things (IoT) are gaining importance. In IoTs, literally anything can be part of it, whether it is sensor nodes or dumb objects, so very diverse types of services can be produced. In this regard, resource management, service creation, service management, service discovery, data storage, and power management would require much better infrastructure and sophisticated mechanism. The amount of data IoTs are going to generate would not be possible for standalone power-constrained IoTs to handle. Cloud computing comes into play here. Integration of IoTs with cloud computing, termed as Cloud of Things (CoT) can help achieve the goals of envisioned IoT and future Internet. This IoT-Cloud computing integration is not straight-forward. It involves many challenges. One of those challenges is data trimming. Because unnecessary communication not only burdens the core network, but also the data center in the cloud. For this purpose, data can be preprocessed and trimmed before sending to the cloud. This can be done through a Smart Gateway, accompanied with a Smart Network or Fog Computing. In this paper, we have discussed this concept in detail and present the architecture of Smart Gateway with Fog Computing. We have tested this concept on the basis of Upload Delay, Synchronization Delay, Jitter, Bulk-data Upload Delay, and Bulk-data Synchronization Delay.",
"title": ""
},
{
"docid": "31865d8e75ee9ea0c9d8c575bbb3eb90",
"text": "Magicians use misdirection to prevent you from realizing the methods used to create a magical effect, thereby allowing you to experience an apparently impossible event. Magicians have acquired much knowledge about misdirection, and have suggested several taxonomies of misdirection. These describe many of the fundamental principles in misdirection, focusing on how misdirection is achieved by magicians. In this article we review the strengths and weaknesses of past taxonomies, and argue that a more natural way of making sense of misdirection is to focus on the perceptual and cognitive mechanisms involved. Our psychologically-based taxonomy has three basic categories, corresponding to the types of psychological mechanisms affected: perception, memory, and reasoning. Each of these categories is then divided into subcategories based on the mechanisms that control these effects. This new taxonomy can help organize magicians' knowledge of misdirection in a meaningful way, and facilitate the dialog between magicians and scientists.",
"title": ""
},
{
"docid": "d59c6a2dd4b6bf7229d71f3ae036328a",
"text": "Community search over large graphs is a fundamental problem in graph analysis. Recent studies propose to compute top-k influential communities, where each reported community not only is a cohesive subgraph but also has a high influence value. The existing approaches to the problem of top-k influential community search can be categorized as index-based algorithms and online search algorithms without indexes. The index-based algorithms, although being very efficient in conducting community searches, need to pre-compute a specialpurpose index and only work for one built-in vertex weight vector. In this paper, we investigate online search approaches and propose an instance-optimal algorithm LocalSearch whose time complexity is linearly proportional to the size of the smallest subgraph that a correct algorithm needs to access without indexes. In addition, we also propose techniques to make LocalSearch progressively compute and report the communities in decreasing influence value order such that k does not need to be specified. Moreover, we extend our framework to the general case of top-k influential community search regarding other cohesiveness measures. Extensive empirical studies on real graphs demonstrate that our algorithms outperform the existing online search algorithms by several orders of magnitude.",
"title": ""
},
{
"docid": "9e208e6beed62575a92f32031b7af8ad",
"text": "Recently, interests on cleaning robots workable in pipes (termed as in-pipe cleaning robot) are increasing because Garbage Automatic Collection Facilities (i.e, GACF) are widely being installed in Seoul metropolitan area of Korea. So far research on in-pipe robot has been focused on inspection rather than cleaning. In GACF, when garbage is moving, we have to remove the impurities which are stuck to the inner face of the pipe (diameter: 300mm or 400mm). Thus, in this paper, by using TRIZ (Inventive Theory of Problem Solving in Russian abbreviation), we will propose an in-pipe cleaning robot of GACF with the 6-link sliding mechanism which can be adjusted to fit into the inner face of pipe using pneumatic pressure(not spring). The proposed in-pipe cleaning robot for GACF can have forward/backward movement itself as well as rotation of brush in cleaning. The robot body should have the limited size suitable for the smaller pipe with diameter of 300mm. In addition, for the pipe with diameter of 400mm, the links of robot should stretch to fit into the diameter of the pipe by using the sliding mechanism. Based on the conceptual design using TRIZ, we will set up the initial design of the robot in collaboration with a field engineer of Robot Valley, Inc. in Korea. For the optimal design of in-pipe cleaning robot, the maximum impulsive force of collision between the robot and the inner face of pipe is simulated by using RecurDyn® when the link of sliding mechanism is stretched to fit into the 400mm diameter of the pipe. The stresses exerted on the 6 links of sliding mechanism by the maximum impulsive force will be simulated by using ANSYS® Workbench based on the Design Of Experiment(in short DOE). Finally the optimal dimensions including thicknesses of 4 links will be decided in order to have the best safety factor as 2 in this paper as well as having the minimum mass of 4 links. It will be verified that the optimal design of 4 links has the best safety factor close to 2 as well as having the minimum mass of 4 links, compared with the initial design performed by the expert of Robot Valley, Inc. In addition, the prototype of in-pipe cleaning robot will be stated with further research.",
"title": ""
},
{
"docid": "b86711e8a418bde07e16bcb9a394d92c",
"text": "This paper reviews and evaluates the evidence for the existence of distinct varieties of developmental dyslexia, analogous to those found in the acquired dyslexic population. Models of the normal adult reading process and of the development of reading in children are used to provide a framework for considering the issues. Data from a large-sample study of the reading patterns of developmental dyslexics are then reported. The lexical and sublexical reading skills of 56 developmental dyslexics were assessed through close comparison with the skills of 56 normally developing readers. The results indicate that there are at least two varieties of developmental dyslexia, the first of which is characterised by a specific difficulty using the lexical procedure, and the second by a difficulty using the sublexical procedure. These subtypes are apparently not rare, but are relatively prevalent in the developmental dyslexic population. The results of a second experiment, which suggest that neither of these reading patterns can be accounted for in terms of a general language disorder, are then reported.",
"title": ""
},
{
"docid": "93afb696fa395a7f7c2a4f3fc2ac690d",
"text": "We present a framework for recognizing isolated and continuous American Sign Language (ASL) sentences from three-dimensional data. The data are obtained by using physics-based three-dimensional tracking methods and then presented as input to Hidden Markov Models (HMMs) for recognition. To improve recognition performance, we model context-dependent HMMs and present a novel method of coupling three-dimensional computer vision methods and HMMs by temporally segmenting the data stream with vision methods. We then use the geometric properties of the segments to constrain the HMM framework for recognition. We show in experiments with a 53 sign vocabulary that three-dimensional features outperform two-dimensional features in recognition performance. Furthermore, we demonstrate that contextdependent modeling and the coupling of vision methods and HMMs improve the accuracy of continuous ASL recognition.",
"title": ""
},
{
"docid": "ac168ff92c464cb90a9a4ca0eb5bfa5c",
"text": "Path computing is a new paradigm that generalizes the edge computing vision into a multi-tier cloud architecture deployed over the geographic span of the network. Path computing supports scalable and localized processing by providing storage and computation along a succession of datacenters of increasing sizes, positioned between the client device and the traditional wide-area cloud data-center. CloudPath is a platform that implements the path computing paradigm. CloudPath consists of an execution environment that enables the dynamic installation of light-weight stateless event handlers, and a distributed eventual consistent storage system that replicates application data on-demand. CloudPath handlers are small, allowing them to be rapidly instantiated on demand on any server that runs the CloudPath execution framework. In turn, CloudPath automatically migrates application data across the multiple datacenter tiers to optimize access latency and reduce bandwidth consumption.",
"title": ""
},
{
"docid": "ccf1f3cb6a9efda6c7d6814ec01d8329",
"text": "Twitter as a micro-blogging platform rose to instant fame mainly due to its minimalist features that allow seamless communication between users. As the conversations grew thick and faster, a placeholder feature called as Hashtags became important as it captured the themes behind the tweets. Prior studies have investigated the conversation dynamics, interplay with other media platforms and communication patterns between users for specific event-based hashtags such as the #Occupy movement. Commonplace hashtags which are used on a daily basis have been largely ignored due to their seemingly innocuous presence in tweets and also due to the lack of connection with real-world events. However, it can be postulated that utility of these hashtags is the main reason behind their continued usage. This study is aimed at understanding the rationale behind the usage of a particular type of commonplace hashtags:-location hashtags such as country and city name hashtags. Tweets with the hashtag #singapore were extracted for a week’s duration. Manual and automatic tweet classification was performed along with social network analysis, to identify the underlying themes. Seven themes were identified. Findings indicate that the hashtag is prominent in tweets about local events, local news, users’ current location and landmark related information sharing. Users who share content from social media sites such as Instagram make use of the hashtag in a more prominent way when compared to users who post textual content. News agencies, commercial bodies and celebrities make use of the hashtag more than common individuals. Overall, the results show the non-conversational nature of the hashtag. The findings are to be validated with other country names and crossvalidated with hashtag data from other social media platforms.",
"title": ""
},
{
"docid": "7b5331b0e6ad693fc97f5f3b543bf00c",
"text": "Relational learning deals with data that are characterized by relational structures. An important task is collective classification, which is to jointly classify networked objects. While it holds a great promise to produce a better accuracy than noncollective classifiers, collective classification is computational challenging and has not leveraged on the recent breakthroughs of deep learning. We present Column Network (CLN), a novel deep learning model for collective classification in multirelational domains. CLN has many desirable theoretical properties: (i) it encodes multi-relations between any two instances; (ii) it is deep and compact, allowing complex functions to be approximated at the network level with a small set of free parameters; (iii) local and relational features are learned simultaneously; (iv) longrange, higher-order dependencies between instances are supported naturally; and (v) crucially, learning and inference are efficient, linear in the size of the network and the number of relations. We evaluate CLN on multiple real-world applications: (a) delay prediction in software projects, (b) PubMed Diabetes publication classification and (c) film genre classification. In all applications, CLN demonstrates a higher accuracy than state-of-the-art rivals.",
"title": ""
},
{
"docid": "418e29af01be9655c06df63918f41092",
"text": "A major goal of unsupervised learning is to discover data representations that are useful for subsequent tasks, without access to supervised labels during training. Typically, this goal is approached by minimizing a surrogate objective, such as the negative log likelihood of a generative model, with the hope that representations useful for subsequent tasks will arise as a side effect. In this work, we propose instead to directly target a later desired task by meta-learning an unsupervised learning rule, which leads to representations useful for that task. Here, our desired task (meta-objective) is the performance of the representation on semi-supervised classification, and we meta-learn an algorithm – an unsupervised weight update rule – that produces representations that perform well under this meta-objective. Additionally, we constrain our unsupervised update rule to a be a biologically-motivated, neuron-local function, which enables it to generalize to novel neural network architectures. We show that the meta-learned update rule produces useful features and sometimes outperforms existing unsupervised learning techniques. We show that the metalearned unsupervised update rule generalizes to train networks with different widths, depths, and nonlinearities. It also generalizes to train on data with randomly permuted input dimensions and even generalizes from image datasets to a text task.",
"title": ""
},
{
"docid": "011332e3d331d461e786fd2827b0434d",
"text": "In this manuscript we present various robust statistical methods popular in the social sciences, and show how to apply them in R using the WRS2 package available on CRAN. We elaborate on robust location measures, and present robust t-test and ANOVA versions for independent and dependent samples, including quantile ANOVA. Furthermore, we present on running interval smoothers as used in robust ANCOVA, strategies for comparing discrete distributions, robust correlation measures and tests, and robust mediator models.",
"title": ""
},
{
"docid": "c5fc804aa7f98a575a0e15b7c28650e8",
"text": "In the past few years, a great attention has been received by web documents as a new source of individual opinions and experience. This situation is producing increasing interest in methods for automatically extracting and analyzing individual opinion from web documents such as customer reviews, weblogs and comments on news. This increase was due to the easy accessibility of documents on the web, as well as the fact that all these were already machine-readable on gaining. At the same time, Machine Learning methods in Natural Language Processing (NLP) and Information Retrieval were considerably increased development of practical methods, making these widely available corpora. Recently, many researchers have focused on this area. They are trying to fetch opinion information and analyze it automatically with computers. This new research domain is usually called Opinion Mining and Sentiment Analysis. . Until now, researchers have developed several techniques to the solution of the problem. This paper try to cover some techniques and approaches that be used in this area.",
"title": ""
},
{
"docid": "789de6123795ad8950c21b0ee8df7315",
"text": "This paper introduces new optimality-preserving operators on Q-functions. We first describe an operator for tabular representations, the consistent Bellman operator, which incorporates a notion of local policy consistency. We show that this local consistency leads to an increase in the action gap at each state; increasing this gap, we argue, mitigates the undesirable effects of approximation and estimation errors on the induced greedy policies. This operator can also be applied to discretized continuous space and time problems, and we provide empirical results evidencing superior performance in this context. Extending the idea of a locally consistent operator, we then derive sufficient conditions for an operator to preserve optimality, leading to a family of operators which includes our consistent Bellman operator. As corollaries we provide a proof of optimality for Baird’s advantage learning algorithm and derive other gap-increasing operators with interesting properties. We conclude with an empirical study on 60 Atari 2600 games illustrating the strong potential of these new operators. Value-based reinforcement learning is an attractive solution to planning problems in environments with unknown, unstructured dynamics. In its canonical form, value-based reinforcement learning produces successive refinements of an initial value function through repeated application of a convergent operator. In particular, value iteration (Bellman 1957) directly computes the value function through the iterated evaluation of Bellman’s equation, either exactly or from samples (e.g. Q-Learning, Watkins 1989). In its simplest form, value iteration begins with an initial value function V0 and successively computes Vk+1 := T Vk, where T is the Bellman operator. When the environment dynamics are unknown, Vk is typically replaced by Qk, the state-action value function, and T is approximated by an empirical Bellman operator. The fixed point of the Bellman operator, Q∗, is the optimal state-action value function or optimal Q-function, from which an optimal policy π∗ can be recovered. In this paper we argue that the optimal Q-function is inconsistent, in the sense that for any action a which is subop∗Now at Carnegie Mellon University. Copyright c © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. timal in state x, Bellman’s equation for Q∗(x, a) describes the value of a nonstationary policy: upon returning to x, this policy selects π∗(x) rather than a. While preserving global consistency appears impractical, we propose a simple modification to the Bellman operator which provides us a with a first-order solution to the inconsistency problem. Accordingly, we call our new operator the consistent Bellman operator. We show that the consistent Bellman operator generally devalues suboptimal actions but preserves the set of optimal policies. As a result, the action gap – the value difference between optimal and second best actions – increases. This increasing of the action gap is advantageous in the presence of approximation or estimation error, and may be crucial for systems operating at a fine time scale such as video games (Togelius et al. 2009; Bellemare et al. 2013), real-time markets (Jiang and Powell 2015), and robotic platforms (Riedmiller et al. 2009; Hoburg and Tedrake 2009; Deisenroth and Rasmussen 2011; Sutton et al. 2011). 
In fact, the idea of devaluating suboptimal actions underpins Baird’s advantage learning (Baird 1999), designed for continuous time control, and occurs naturally when considering the discretized solution of continuous time and space MDPs (e.g. Munos and Moore 1998; 2002), whose limit is the HamiltonJacobi-Bellman equation (Kushner and Dupuis 2001). Our empirical results on the bicycle domain (Randlov and Alstrom 1998) show a marked increase in performance from using the consistent Bellman operator. In the second half of this paper we derive novel sufficient conditions for an operator to preserve optimality. The relative weakness of these new conditions reveal that it is possible to deviate significantly from the Bellman operator without sacrificing optimality: an optimality-preserving operator needs not be contractive, nor even guarantee convergence of the Q-values for suboptimal actions. While numerous alternatives to the Bellman operator have been put forward (e.g. recently Azar et al. 2011; Bertsekas and Yu 2012), we believe our work to be the first to propose such a major departure from the canonical fixed-point condition required from an optimality-preserving operator. As proof of the richness of this new operator family we describe a few practical instantiations with unique properties. We use our operators to obtain state-of-the-art empirical results on the Arcade Learning Environment (Bellemare et al. 2013). We consider the Deep Q-Network (DQN) architecture of Mnih et al. (2015), replacing only its learning rule with one of our operators. Remarkably, this one-line change produces agents that significantly outperform the original DQN. Our work, we believe, demonstrates the potential impact of rethinking the core components of value-based reinforcement learning.",
"title": ""
}
] |
scidocsrr
|
aee1109ece9695cd11b1accce21368ed
|
A Re-Examination of Text Categorization Methods
|
[
{
"docid": "1ec9b98f0f7509088e7af987af2f51a2",
"text": "In this paper, we describe an automated learning approach to text categorization based on perception learning and a new feature selection metric, called correlation coefficient. Our approach has been teated on the standard Reuters text categorization collection. Empirical results indicate that our approach outperforms the best published results on this % uters collection. In particular, our new feature selection method yields comiderable improvement. We also investigate the usability of our automated hxu-n~ approach by actually developing a system that categorizes texts into a treeof categories. We compare tbe accuracy of our learning approach to a rrddmsed, expert system ap preach that uses a text categorization shell built by Cams gie Group. Although our automated learning approach still gives a lower accuracy, by appropriately inmrporating a set of manually chosen worda to use as f~ures, the combined, semi-automated approach yields accuracy close to the * baaed approach.",
"title": ""
}
] |
[
{
"docid": "66f684ba92fe735fecfbfb53571bad5f",
"text": "Some empirical learning tasks are concerned with predicting values rather than the more familiar categories. This paper describes a new system, m5, that constructs tree-based piecewise linear models. Four case studies are presented in which m5 is compared to other methods.",
"title": ""
},
{
"docid": "e6bccfd2a665687bf7bc050e788b27f1",
"text": "Continuous dimensional models of human affect, such as those based on valence and arousal, have been shown to be more accurate in describing a broad range of spontaneous, everyday emotions than the more traditional models of discrete stereotypical emotion categories (e.g. happiness, surprise). However, most prior work on estimating valence and arousal considered only laboratory settings and acted data. It is unclear whether the findings of these studies also hold when the methodologies proposed in these works are tested on data collected in-the-wild. In this paper we investigate this. We propose a new dataset of highly accurate per-frame annotations of valence and arousal for 600 challenging video clips extracted from feature films (also used in part for the AFEW dataset). For each video clip, we further provide per-frame annotations of 68 facial landmarks. We subsequently evaluate a number of common baseline and state-of-the-art methods on both a commonly used laboratory recording dataset (Semaine database) and the newly proposed recording set (AFEW-VA). Our results show that geometric features perform well independently of the settings. However, as expected, methods that perform well on constrained data do not necessarily generalise to uncontrolled data and vice-versa. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e6f4d3b032f6e92f1c142f3e229e9627",
"text": "An epitaxy technique, confined lateral selective epitaxial growth (CLSEG), which produces wide, thin slabs of single-crystal silicon over insulator, using only conventional processing, is discussed. As-grown films of CLSEG 0.9 mu m thick, 8.0 mu m wide, and 500 mu m long were produced at 1000 degrees C at reduced pressure. Junction diodes fabricated in CLSEG material show ideality factors of 1.05 with reverse leakage currents comparable to those of diodes built in SEG homoepitaxial material. Metal-gate p-channel MOSFETs in CLSEG with channel dopings of 2*10/sup 16/ cm/sup -3/ exhibit average mobilities of 283 cm/sup 2//V-s and subthreshold slopes of 223 mV/decade.<<ETX>>",
"title": ""
},
{
"docid": "e751fdbc980c36b95c81f0f865bb5033",
"text": "In order to match shoppers with desired products and provide personalized promotions, whether in online or offline shopping worlds, it is critical to model both consumer preferences and price sensitivities simultaneously. Personalized preferences have been thoroughly studied in the field of recommender systems, though price (and price sensitivity) has received relatively little attention. At the same time, price sensitivity has been richly explored in the area of economics, though typically not in the context of developing scalable, working systems to generate recommendations. In this study, we seek to bridge the gap between large-scale recommender systems and established consumer theories from economics, and propose a nested feature-based matrix factorization framework to model both preferences and price sensitivities. Quantitative and qualitative results indicate the proposed personalized, interpretable and scalable framework is capable of providing satisfying recommendations (on two datasets of grocery transactions) and can be applied to obtain economic insights into consumer behavior.",
"title": ""
},
{
"docid": "6d096dc86d240370bef7cc4e4cdd12e5",
"text": "Modern software systems are subject to uncertainties, such as dynamics in the availability of resources or changes of system goals. Self-adaptation enables a system to reason about runtime models to adapt itself and realises its goals under uncertainties. Our focus is on providing guarantees for adaption goals. A prominent approach to provide such guarantees is automated verification of a stochastic model that encodes up-to-date knowledge of the system and relevant qualities. The verification results allow selecting an adaption option that satisfies the goals. There are two issues with this state of the art approach: i) changing goals at runtime (a challenging type of uncertainty) is difficult, and ii) exhaustive verification suffers from the state space explosion problem. In this paper, we propose a novel modular approach for decision making in self-adaptive systems that combines distinct models for each relevant quality with runtime simulation of the models. Distinct models support on the fly changes of goals. Simulation enables efficient decision making to select an adaptation option that satisfies the system goals. The tradeoff is that simulation results can only provide guarantees with a certain level of accuracy. We demonstrate the benefits and tradeoffs of the approach for a service-based telecare system.",
"title": ""
},
{
"docid": "c155ce2743c59f4ce49fdffe74d94443",
"text": "The theta oscillation (5-10Hz) is a prominent behavior-specific brain rhythm. This review summarizes studies showing the multifaceted role of theta rhythm in cognitive functions, including spatial coding, time coding and memory, exploratory locomotion and anxiety-related behaviors. We describe how activity of hippocampal theta rhythm generators - medial septum, nucleus incertus and entorhinal cortex, links theta with specific behaviors. We review evidence for functions of the theta-rhythmic signaling to subcortical targets, including lateral septum. Further, we describe functional associations of theta oscillation properties - phase, frequency and amplitude - with memory, locomotion and anxiety, and outline how manipulations of these features, using optogenetics or pharmacology, affect associative and innate behaviors. We discuss work linking cognition to the slope of the theta frequency to running speed regression, and emotion-sensitivity (anxiolysis) to its y-intercept. Finally, we describe parallel emergence of theta oscillations, theta-mediated neuronal activity and behaviors during development. This review highlights a complex interplay of neuronal circuits and synchronization features, which enables an adaptive regulation of multiple behaviors by theta-rhythmic signaling.",
"title": ""
},
{
"docid": "7032e1ea76108b005d5303152c1eb365",
"text": "We investigate the effect of social media content on customer engagement using a large-scale field study on Facebook. We content-code more than 100,000 unique messages across 800 companies engaging with users on Facebook using a combination of Amazon Mechanical Turk and state-of-the-art Natural Language Processing algorithms. We use this large-scale database of content attributes to test the effect of social media marketing content on subsequent user engagement − defined as Likes and comments − with the messages. We develop methods to account for potential selection biases that arise from Facebook’s filtering algorithm, EdgeRank, that assigns messages non-randomly to users. We find that inclusion of persuasive content − like emotional and philanthropic content − increases engagement with a message. We find that informative content − like mentions of prices, availability, and product features − reduce engagement when included in messages in isolation, but increase engagement when provided in combination with persuasive attributes. Persuasive content thus seems to be the key to effective engagement. Our results inform content design strategies in social media, and the methodology we develop to content-code large-scale textual data provides a framework for future studies on unstructured natural language data such as advertising content or product reviews.",
"title": ""
},
{
"docid": "76ebe7821ae75b50116d6ac3f156e571",
"text": "Since the financial crisis in 2008 organisations have been forced to rethink their risk management. Therefore entities have changed from silo-based Traditional Risk Management to the overarching framework Enterprise Risk Management. Yet Enterprise Risk Management is a young model and it has to contend with various challenges. At the moment there are just a few research papers but they claim that this approach is reasonable. The two frameworks COSO and GRC try to support Enterprise Risk Management. Research does not provide studies about their efficiency. The challenges of Enterprise Risk Management are the composition of the system, suitable metrics, the human factor and the complex environment.",
"title": ""
},
{
"docid": "217b7d425d280a1ebb55862cc9bfd848",
"text": "The present study is focused on a review of the current state of investigating music-evoked emotions experimentally, theoretically and with respect to their therapeutic potentials. After a concise historical overview and a schematic of the hearing mechanisms, experimental studies on music listeners and on music performers are discussed, starting with the presentation of characteristic musical stimuli and the basic features of tomographic imaging of emotional activation in the brain, such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), which offer high spatial resolution in the millimeter range. The progress in correlating activation imaging in the brain to the psychological understanding of music-evoked emotion is demonstrated and some prospects for future research are outlined. Research in psychoneuroendocrinology and molecular markers is reviewed in the context of music-evoked emotions and the results indicate that the research in this area should be intensified. An assessment of studies involving measuring techniques with high temporal resolution down to the 10 ms range, as, e.g., electroencephalography (EEG), event-related brain potentials (ERP), magnetoencephalography (MEG), skin conductance response (SCR), finger temperature, and goose bump development (piloerection) can yield information on the dynamics and kinetics of emotion. Genetic investigations reviewed suggest the heredity transmission of a predilection for music. Theoretical approaches to musical emotion are directed to a unified model for experimental neurological evidence and aesthetic judgment. Finally, the reports on musical therapy are briefly outlined. The study concludes with an outlook on emerging technologies and future research fields.",
"title": ""
},
{
"docid": "2e12a5f308472f3f4d19d4399dc85546",
"text": "This paper presents a taxonomy of replay attacks on cryptographic protocols in terms of message origin and destination. The taxonomy is independent of any method used to analyze or prevent such attacks. It is also complete in the sense that any replay attack is composed entirely of elements classi ed by the taxonomy. The classi cation of attacks is illustrated using both new and previously known attacks on protocols. The taxonomy is also used to discuss the appropriateness of particular countermeasures and protocol analysis methods to particular kinds of replays.",
"title": ""
},
{
"docid": "6920f09ffecd83bee8cc813b7699db0d",
"text": "To examine the responsiveness of Functional Assessment of Cancer Therapy-Prostate (FACT-P) and Short Form-12 Health Survey version 2 (SF-12 v2) in prostate cancer patients because there is a lack of evidence to support their responsiveness in this patient population. One hundred sixty-eight subjects with prostate cancer were surveyed at baseline and at 6 months using the SF-12 v2 and FACT-P version 4. Internal responsiveness was assessed using paired t test and generalized estimating equation. External responsiveness was evaluated using receiver operating characteristic curve analysis. The internal responsiveness of the FACT-P and SF-12 v2 to detect positive change was satisfactory. The FACT-P and SF-12 v2 could not detect negative change. The FACT-P and the SF-12 v2 performed the best in distinguishing between improved general health and worsened general health. The FACT-P performed better in distinguishing between unchanged general health and worsened general health. The SF-12 v2 performed better in distinguishing between unchanged general health and improved general health. Positive change detected by these measures should be interpreted with caution as they might be too responsive to detect “noise,” which is not clinically significant. The ability of the FACT-P and the SF-12 v2 to detect negative change was disappointing. The internal and external responsiveness of the social well-being of the FACT-P cannot be supported, suggesting that it is not suitable to longitudinally monitor the social component of HRQOL in prostate cancer patients. The study suggested that generic and disease-specific measures should be used together to complement each other.",
"title": ""
},
{
"docid": "c5b9053b1b22d56dd827009ef529004d",
"text": "An integrated receiver with high sensitivity and low walk error for a military purpose pulsed time-of-flight (TOF) LADAR system is proposed. The proposed receiver adopts a dual-gain capacitive-feedback TIA (C-TIA) instead of widely used resistive-feedback TIA (R-TIA) to increase the sensitivity. In addition, a new walk-error improvement circuit based on a constant-delay detection method is proposed. Implemented in 0.35 μm CMOS technology, the receiver achieves an input-referred noise current of 1.36 pA/√Hz with bandwidth of 140 MHz and minimum detectable signal (MDS) of 10 nW with a 5 ns pulse at SNR=3.3, maximum walk-error of 2.8 ns, and a dynamic range of 1:12,000 over the operating temperature range of -40 °C to +85 °C.",
"title": ""
},
{
"docid": "eebca83626e8568e8b92019541466873",
"text": "There is a need for new spectrum access protocols that are opportunistic, flexible and efficient, yet fair. Game theory provides a framework for analyzing spectrum access, a problem that involves complex distributed decisions by independent spectrum users. We develop a cooperative game theory model to analyze a scenario where nodes in a multi-hop wireless network need to agree on a fair allocation of spectrum. We show that in high interference environments, the utility space of the game is non-convex, which may make some optimal allocations unachievable with pure strategies. However, we show that as the number of channels available increases, the utility space becomes close to convex and thus optimal allocations become achievable with pure strategies. We propose the use of the Nash Bargaining Solution and show that it achieves a good compromise between fairness and efficiency, using a small number of channels. Finally, we propose a distributed algorithm for spectrum sharing and show that it achieves allocations reasonably close to the Nash Bargaining Solution.",
"title": ""
},
{
"docid": "b622b927d718d8645858ecfc1809ed4d",
"text": "This paper presents our contribution to the SemEval 2016 task 5: Aspect-Based Sentiment Analysis. We have addressed Subtask 1 for the restaurant domain, in English and French, which implies opinion target expression detection, aspect category and polarity classification. We describe the different components of the system, based on composite models combining sophisticated linguistic features with Machine Learning algorithms, and report the results obtained for both languages.",
"title": ""
},
{
"docid": "ac2d4f4e6c73c5ab1734bfeae3a7c30a",
"text": "While neural, encoder-decoder models have had significant empirical success in text generation, there remain several unaddressed problems with this style of generation. Encoderdecoder models are largely (a) uninterpretable, and (b) difficult to control in terms of their phrasing or content. This work proposes a neural generation system using a hidden semimarkov model (HSMM) decoder, which learns latent, discrete templates jointly with learning to generate. We show that this model learns useful templates, and that these templates make generation both more interpretable and controllable. Furthermore, we show that this approach scales to real data sets and achieves strong performance nearing that of encoderdecoder text generation models.",
"title": ""
},
{
"docid": "4ecc49bb99ade138783899b6f9b47f16",
"text": "This paper compares direct reinforcement learning (no explicit model) and model-based reinforcement learning on a simple task: pendulum swing up. We nd that in this task model-based approaches support reinforcement learning from smaller amounts of training data and eecient handling of changing goals.",
"title": ""
},
{
"docid": "9af09d6ba8b1628284f3169316993ee0",
"text": "This paper proposed a retinal image segmentation method based on conditional Generative Adversarial Network (cGAN) to segment optic disc. The proposed model consists of two successive networks: generator and discriminator. The generator learns to map information from the observing input (i.e., retinal fundus color image), to the output (i.e., binary mask). Then, the discriminator learns as a loss function to train this mapping by comparing the ground-truth and the predicted output with observing the input image as a condition. Experiments were performed on two publicly available dataset; DRISHTI GS1 and RIM-ONE. The proposed model outperformed state-of-the-art-methods by achieving around 0.96 and 0.98 of Jaccard and Dice coefficients, respectively. Moreover, an image segmentation is performed in less than a second on recent GPU.",
"title": ""
},
{
"docid": "32aaaa1bb43a5631cebb4dd85ef54105",
"text": "In this work sentiment analysis of annual budget for Financial year 2016–17 is done. Text mining is used to extract text data from the budget document and to compute the word association of significant words and their correlation in computed with the associated words. Word frequency and the corresponding word cloud is plotted. The analysis is done in R software. The corresponding sentiment score is computed and analyzed. This analysis is of significant importance keeping in mind the sentiment reflected about the budget in the official budget document.",
"title": ""
},
{
"docid": "744637cd8d2e4035f47c311b936cedc6",
"text": "The African savannah elephant (Loxodonta africana) is one of the critically endangered animals. Conservation of genetic and cellular resources is important for the promotion of wild life-related research. Although primary cultured cells are a useful model for the physiology and genomics of the wild-type animals, their distribution is restricted due to the limited number of cell divisions allowed in them. Here, we tried to immortalize a primary cell line of L. africana with by overexpressing human mutant form of cyclin-dependent kinase 4 (CDK4R24C), cyclin D, and telomerase (TERT). It has been shown before that the combination of human CDK4R24C, cyclin D, and TERT induces the efficient cellular immortalization of cells derived from humans, bovine, swine, and monkeys. Interestingly, although the combination of these three genes extended the cellular proliferation of the L. africana-derived cells, they did not induce cellular immortalization. This study suggest that control of cellular senescence in L. africana-derived cells would be different molecular mechanisms compared to those governing human, bovine, swine, and monkey cells.",
"title": ""
}
] |
scidocsrr
|
7341c82e76f53843640f1eadff1aaf5d
|
A review of inverse reinforcement learning theory and recent advances
|
[
{
"docid": "cae4703a50910c7718284c6f8230a4bc",
"text": "Autonomous helicopter flight is widely regarded to be a highly challenging control problem. Despite this fact, human experts can reliably fly helicopters through a wide range of maneuvers, including aerobatic maneuvers at the edge of the helicopter’s capabilities. We present apprenticeship learning algorithms, which leverage expert demonstrations to efficiently learn good controllers for tasks being demonstrated by an expert. These apprenticeship learning algorithms have enabled us to significantly extend the state of the art in autonomous helicopter aerobatics. Our experimental results include the first autonomous execution of a wide range of maneuvers, including but not limited to in-place flips, in-place rolls, loops and hurricanes, and even auto-rotation landings, chaos and tic-tocs, which only exceptional human pilots can perform. Our results also include complete airshows, which require autonomous transitions between many of these maneuvers. Our controllers perform as well as, and often even better than, our expert pilot.",
"title": ""
},
{
"docid": "fb4837a619a6b9e49ca2de944ec2314e",
"text": "Inverse reinforcement learning addresses the general problem of recovering a reward function from samples of a policy provided by an expert/demonstrator. In this paper, we introduce active learning for inverse reinforcement learning. We propose an algorithm that allows the agent to query the demonstrator for samples at specific states, instead of relying only on samples provided at “arbitrary” states. The purpose of our algorithm is to estimate the reward function with similar accuracy as other methods from the literature while reducing the amount of policy samples required from the expert. We also discuss the use of our algorithm in higher dimensional problems, using both Monte Carlo and gradient methods. We present illustrative results of our algorithm in several simulated examples of different complexities.",
"title": ""
},
{
"docid": "a4473c2cc7da3fb5ee52b60cee24b9b9",
"text": "The ALVINN (Autonomous h d Vehide In a N d Network) projea addresses the problem of training ani&ial naxal naarork in real time to perform difficult perapaon tasks. A L W is a back-propagation network dmpd to dnve the CMU Navlab. a modided Chevy van. 'Ibis ptpa describes the training techniques which allow ALVIN\" to luun in under 5 minutes to autonomously conm>l the Navlab by wardung ahuamr, dziver's rmaions. Usingthese technrques A L W has b&n trained to drive in a variety of Cirarmstanccs including single-lane paved and unprved roads. and multi-lane lined and rmlinecd roads, at speeds of up IO 20 miles per hour",
"title": ""
}
] |
[
{
"docid": "d38e5fa4adadc3e979c5de812599c78a",
"text": "The convergence properties of a nearest neighbor rule that uses an editing procedure to reduce the number of preclassified samples and to improve the performance of the rule are developed. Editing of the preclassified samples using the three-nearest neighbor rule followed by classification using the single-nearest neighbor rule with the remaining preclassified samples appears to produce a decision procedure whose risk approaches the Bayes' risk quite closely in many problems with only a few preclassified samples. The asymptotic risk of the nearest neighbor rules and the nearest neighbor rules using edited preclassified samples is calculated for several problems.",
"title": ""
},
{
"docid": "26295dded01b06c8b11349723fea81dd",
"text": "The increasing popularity of parametric design tools goes hand in hand with the use of building performance simulation (BPS) tools from the early design phase. However, current methods require a significant computational time and a high number of parameters as input, as they are based on traditional BPS tools conceived for detailed building design phase. Their application to the urban scale is hence difficult. As an alternative to the existing approaches, we developed an interface to CitySim, a validated building simulation tool adapted to urban scale assessments, bundled as a plug-in for Grasshopper, a popular parametric design platform. On the one hand, CitySim allows faster simulations and requires fewer parameters than traditional BPS tools, as it is based on algorithms providing a good trade-off between the simulations requirements and their accuracy at the urban scale; on the other hand, Grasshopper allows the easy manipulation of building masses and energy simulation parameters through semi-automated parametric",
"title": ""
},
{
"docid": "7dc5e63ddbb8ec509101299924093c8b",
"text": "The task of aspect and opinion terms co-extraction aims to explicitly extract aspect terms describing features of an entity and opinion terms expressing emotions from user-generated texts. To achieve this task, one effective approach is to exploit relations between aspect terms and opinion terms by parsing syntactic structure for each sentence. However, this approach requires expensive effort for parsing and highly depends on the quality of the parsing results. In this paper, we offer a novel deep learning model, named coupled multi-layer attentions. The proposed model provides an end-to-end solution and does not require any parsers or other linguistic resources for preprocessing. Specifically, the proposed model is a multilayer attention network, where each layer consists of a couple of attentions with tensor operators. One attention is for extracting aspect terms, while the other is for extracting opinion terms. They are learned interactively to dually propagate information between aspect terms and opinion terms. Through multiple layers, the model can further exploit indirect relations between terms for more precise information extraction. Experimental results on three benchmark datasets in SemEval Challenge 2014 and 2015 show that our model achieves stateof-the-art performances compared with several baselines.",
"title": ""
},
{
"docid": "2c3ab7e0f49dc4575c77a712e8184ce0",
"text": "The cubature Kalman filter (CKF), which is based on the third degree spherical–radial cubature rule, is numericallymore stable than the unscented Kalman filter (UKF) but less accurate than theGauss–Hermite quadrature filter (GHQF). To improve the performance of the CKF, a new class of CKFs with arbitrary degrees of accuracy in computing the spherical and radial integrals is proposed. The third-degree CKF is a special case of the class. The high-degree CKFs of the class can achieve the accuracy and stability performances close to those of the GHQF but at lower computational cost. A numerical integration problem and a target tracking problem are utilized to demonstrate the necessity of using the high-degree cubature rules to improve the performance. The target tracking simulation shows that the fifth-degree CKF can achieve higher accuracy than the extended Kalman filter, the UKF, the third-degree CKF, and the particle filter, and is computationally much more efficient than the GHQF. © 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8b7caff264c4258f0ae91f5927fde978",
"text": "Table detection is an important task in the field of document analysis. It has been extensively studied since a couple of decades. Various kinds of document mediums are involved, from scanned images to web pages, from plain texts to PDF files. Numerous algorithms published bring up a challenging issue: how to evaluate algorithms in different context. Currently, most work on table detection conducts experiments on their in-house dataset. Even the few sources of online datasets are targeted at image documents only. Moreover, Precision and recall measurement are usual practice in order to account performance based on human evaluation. In this paper, we provide a dataset that is representative, large and most importantly, publicly available. The compatible format of the ground truth makes evaluation independent of document medium. We also propose a set of new measures, implement them, and open the source code. Finally, three existing table detection algorithms are evaluated to demonstrate the reliability of the dataset and metrics.",
"title": ""
},
{
"docid": "6210d2da6100adbd4db89a983d00419f",
"text": "Many binary code encoding schemes based on hashing have been actively studied recently, since they can provide efficient similarity search, especially nearest neighbor search, and compact data representations suitable for handling large scale image databases in many computer vision problems. Existing hashing techniques encode high-dimensional data points by using hyperplane-based hashing functions. In this paper we propose a novel hypersphere-based hashing function, spherical hashing, to map more spatially coherent data points into a binary code compared to hyperplane-based hashing functions. Furthermore, we propose a new binary code distance function, spherical Hamming distance, that is tailored to our hypersphere-based binary coding scheme, and design an efficient iterative optimization process to achieve balanced partitioning of data points for each hash function and independence between hashing functions. Our extensive experiments show that our spherical hashing technique significantly outperforms six state-of-the-art hashing techniques based on hyperplanes across various image benchmarks of sizes ranging from one to 75 million of GIST descriptors. The performance gains are consistent and large, up to 100% improvements. The excellent results confirm the unique merits of the proposed idea in using hyperspheres to encode proximity regions in high-dimensional spaces. Finally, our method is intuitive and easy to implement.",
"title": ""
},
{
"docid": "d81c866f09dfbead73c8d55986b231ef",
"text": "Phenazepam is a benzodiazepine derivative that has been in clinical use in Russia since 1978 and is not available by prescription in the United States; however, it is attainable through various internet websites, sold either as tablets or as a reference grade crystalline powder. Presented here is the case of a 42-year old Caucasian male who died as the result of combined phenazepam, morphine, codeine, and thebaine intoxication. A vial of white powder labeled \"Phenazepam, Purity 99%, CAS No. 51753-57-2, Research Sample\", a short straw, and several poppy seed pods were found on the scene. Investigation revealed that the decedent had a history of ordering medications over the internet and that he had consumed poppy seed tea prior to his death. Phenazepam, morphine, codeine, and thebaine were present in the blood at 386, 116, 85, and 72 ng/mL, respectively.",
"title": ""
},
{
"docid": "49575576bc5a0b949c81b0275cbc5f41",
"text": "From email to online banking, passwords are an essential component of modern internet use. Yet, users do not always have good password security practices, leaving their accounts vulnerable to attack. We conducted a study which combines self-report survey responses with measures of actual online behavior gathered from 134 participants over the course of six weeks. We find that people do tend to re-use each password on 1.7–3.4 different websites, they reuse passwords that are more complex, and mostly they tend to re-use passwords that they have to enter frequently. We also investigated whether self-report measures are accurate indicators of actual behavior, finding that though people understand password security, their self-reported intentions have only a weak correlation with reality. These findings suggest that users manage the challenge of having many passwords by choosing a complex password on a website where they have to enter it frequently in order to memorize that password, and then re-using that strong password across other websites.",
"title": ""
},
{
"docid": "e577c2827822bfe2f1fc177efeeef732",
"text": "This paper presents a control problem involving an experimental propeller setup that is called the twin rotor multi-input multi-output system (TRMS). The control objective is to make the beam of the TRMS move quickly and accurately to the desired attitudes, both the pitch angle and the azimuth angle in the condition of decoupling between two axes. It is difficult to design a suitable controller because of the influence between the two axes and nonlinear movement. For easy demonstration in the vertical and horizontal separately, the TRMS is decoupled by the main rotor and tail rotor. An intelligent control scheme which utilizes a hybrid PID controller is implemented to this problem. Simulation results show that the new approach to the TRMS control problem can improve the tracking performance and reduce control energy.",
"title": ""
},
{
"docid": "d6d07f50778ba3d99f00938b69fe0081",
"text": "The use of metal casing is attractive to achieve robustness of modern slim tablet devices. The metal casing includes the metal back cover and the metal frame around the edges thereof. For such metal-casing tablet devices, the frame antenna that uses a part of the metal frame as an antenna's radiator is promising to achieve wide bandwidths for mobile communications. In this paper, the frame antenna based on the simple half-loop antenna structure to cover the long-term evolution 746-960 and 1710-2690 MHz bands is presented. The half-loop structure for the frame antenna is easy for manufacturing and increases the robustness of the metal casing. The dual-wideband operation of the half-loop frame antenna is obtained by using an elevated feed network supported by a thin feed substrate. The measured antenna efficiencies are, respectively, 45%-69% and 60%-83% in the low and high bands. By selecting different feed circuits, the antenna's low band can also be shifted from 746-960 MHz to lower frequencies such as 698-840 MHz, with the antenna's high-band coverage very slightly varied. The working principle of the antenna with the elevated feed network is discussed. The antenna is also fabricated and tested, and experimental results are presented.",
"title": ""
},
{
"docid": "57a333a88a5c1f076fd096ec4cde4cba",
"text": "2.1 HISTORY OF BIOTECHNOLOGY....................................................................................................6 2.2 MODERN BIOTECHNOLOGY ........................................................................................................6 2.3 THE GM DEBATE........................................................................................................................7 2.4 APPLYING THE PRECAUTIONARY APPROACH TO GMOS .............................................................8 2.5 RISK ASSESSMENT ISSUES ..........................................................................................................9 2.6 LEGAL CONTEXT ......................................................................................................................10 T",
"title": ""
},
{
"docid": "4519e039416fe4548e08a15b30b8a14f",
"text": "The R-tree, one of the most popular access methods for rectangles, is based on the heuristic optimization of the area of the enclosing rectangle in each inner node. By running numerous experiments in a standardized testbed under highly varying data, queries and operations, we were able to design the R*-tree which incorporates a combined optimization of area, margin and overlap of each enclosing rectangle in the directory. Using our standardized testbed in an exhaustive performance comparison, it turned out that the R*-tree clearly outperforms the existing R-tree variants. Guttman's linear and quadratic R-tree and Greene's variant of the R-tree. This superiority of the R*-tree holds for different types of queries and operations, such as map overlay, for both rectangles and multidimensional points in all experiments. From a practical point of view the R*-tree is very attractive because of the following two reasons 1 it efficiently supports point and spatial data at the same time and 2 its implementation cost is only slightly higher than that of other R-trees.",
"title": ""
},
{
"docid": "1bc91b4547481a81c2963dd117a96370",
"text": "Breast cancer is one of the main causes of women mortality worldwide. Ultrasonography (USG) is other modalities than mammography that capable to support radiologists in diagnosing breast cancer. However, the diagnosis may come with different interpretation depending on the radiologists experience. Therefore, Computer-Aided Diagnosis (CAD) is developed as a tool for radiologist's second opinion. CAD is built based on digital image processing of ultrasound (US) images which consists of several stages. Lesion segmentation is an important step in CAD system because it contains many important features for classification process related to lesion characteristics. This study provides a performance analysis and comparison of image segmentation for breast USG images. In this paper, several methods are presented such as a comprehensive comparison of adaptive thresholding, fuzzy C-Means (FCM), Fast Global Minimization for Active Contour (FGMAC) and Active Contours Without Edges (ACWE). The performance of these methods are evaluated with evaluation metrics Dice coefficient, Jaccard coefficient, FPR, FNR, Hausdorff distance, PSNR and MSSD parameters. Morphological operation is able to increase the performance of each segmentation methods. Overall, ACWE with morphological operation gives the best performance compare to the other methods with the similarity level of more than 90%.",
"title": ""
},
{
"docid": "77f5c568ed065e4f23165575c0a05da6",
"text": "Localization is the problem of determining the position of a mobile robot from sensor data. Most existing localization approaches are passive, i.e., they do not exploit the opportunity to control the robot's effectors during localization. This paper proposes an active localization approach. The approach provides rational criteria for (1) setting the robot's motion direction (exploration), and (2) determining the pointing direction of the sensors so as to most efficiently localize the robot. Furthermore, it is able to deal with noisy sensors and approximative world models. The appropriateness of our approach is demonstrated empirically using a mobile robot in a structured office environment.",
"title": ""
},
{
"docid": "02d11f4663277bb55a289d03403b5eb2",
"text": "Financial markets play an important role on the economical and social organization of modern society. In these kinds of markets, information is an invaluable asset. However, with the modernization of the financial transactions and the information systems, the large amount of information available for a trader can make prohibitive the analysis of a financial asset. In the last decades, many researchers have attempted to develop computational intelligent methods and algorithms to support the decision-making in different financial market segments. In the literature, there is a huge number of scientific papers that investigate the use of computational intelligence techniques to solve financial market problems. However, only few studies have focused on review the literature of this topic. Most of the existing review articles have a limited scope, either by focusing on a specific financial market application or by focusing on a family of machine learning algorithms. This paper presents a review of the application of several computational intelligent methods in several financial applications. This paper gives an overview of the most important primary studies published from 2009 to 2015, which cover techniques for preprocessing and clustering of financial data, for forecasting future market movements, for mining financial text information, among others. The main contributions of this paper are: (i) a comprehensive review of the literature of this field, (ii) the definition of a systematic procedure for guiding the task of building an intelligent trading system and (iii) a discussion about the main challenges and open problems in this scientific field. © 2016 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "dc1360563cb509c4213a68d2c9be56f1",
"text": "We present a novel efficient algorithm for portfolio selection which theoretically attains two desirable properties: 1. Worst-case guarantee: the algorithm is universal in the sense that it asymptotically performs almost as well as the best constant rebalanced portfolio determined in hindsight from the realized market prices. Furthermore, it attains the tightest known bounds on the regret, or the log-wealth difference relative to the best constant rebalanced portfolio. We prove that the regret of algorithm is bounded by O(logQ), where Q is the quadratic variation of the stock prices. This is the first improvement upon Cover’s [Cov91] seminal work that attains a regret bound of O(log T ), where T is the number of trading iterations. 2. Average-case guarantee: in the Geometric Brownian Motion (GBM) model of stock prices, our algorithm attains tighter regret bounds, which are provably impossible in the worst-case. Hence, when the GBM model is a good approximation of the behavior of market, the new algorithm has an advantage over previous ones, albeit retaining worst-case guarantees. We derive this algorithm as a special case of a novel and more general method for online convex optimization with exp-concave loss functions.",
"title": ""
},
{
"docid": "6cb2004d77c5a0ccb4f0cbab3058b2bc",
"text": "the field of optical character recognition.",
"title": ""
},
{
"docid": "4cfeef6e449e37219c75f8063220c1f8",
"text": "The 20 century was based on local linear engineering of complicated systems. We made cars, airplanes and chemical plants for example. The 21ot century has opened a new basis for holistic non-linear design of complex systems, such as the Internet, air traffic management and nanotechnologies. Complexity, interconnectivity, interaction and communication are major attributes of our evolving society. But, more interestingly, we have started to understand that chaos theories may be more important than reductionism, to better understand and thrive on our planet. Systems need to be investigated and tested as wholes, which requires a cross-disciplinary approach and new conceptual principles and tools. Consequently, schools cannot continue to teach isolated disciplines based on simple reductionism. Science; Technology, Engineering, and Mathematics (STEM) should be integrated together with the Arts to promote creativity together with rationalization, and move to STEAM (with an \"A\" for Arts). This new concept emphasizes the possibility of longer-term socio-technical futures instead of short-term financial predictions that currently lead to uncontrolled economies. Human-centered design (HCD) can contribute to improving STEAM education technologies, systems and practices. HCD not only provides tools and techniques to build useful and usable things, but also an integrated approach to learning by doing, expressing and critiquing, exploring possible futures, and understanding complex systems.",
"title": ""
},
{
"docid": "f779bf251b3d066e594867680e080ef4",
"text": "Machine Translation is area of research since six decades. It is gaining popularity since last decade due to better computational facilities available at personal computer systems. This paper presents different Machine Translation system where Sanskrit is involved as source, target or key support language. Researchers employ various techniques like Rule based, Corpus based, Direct for machine translation. The main aim to focus on Sanskrit in Machine Translation in this paper is to uncover the language suitability, its morphology and employ appropriate MT techniques.",
"title": ""
},
{
"docid": "f83d8a69a4078baf4048b207324e505f",
"text": "Low-dose computed tomography (LDCT) has attracted major attention in the medical imaging field, since CT-associated X-ray radiation carries health risks for patients. The reduction of the CT radiation dose, however, compromises the signal-to-noise ratio, which affects image quality and diagnostic performance. Recently, deep-learning-based algorithms have achieved promising results in LDCT denoising, especially convolutional neural network (CNN) and generative adversarial network (GAN) architectures. This paper introduces a conveying path-based convolutional encoder-decoder (CPCE) network in 2-D and 3-D configurations within the GAN framework for LDCT denoising. A novel feature of this approach is that an initial 3-D CPCE denoising model can be directly obtained by extending a trained 2-D CNN, which is then fine-tuned to incorporate 3-D spatial information from adjacent slices. Based on the transfer learning from 2-D to 3-D, the 3-D network converges faster and achieves a better denoising performance when compared with a training from scratch. By comparing the CPCE network with recently published work based on the simulated Mayo data set and the real MGH data set, we demonstrate that the 3-D CPCE denoising model has a better performance in that it suppresses image noise and preserves subtle structures.",
"title": ""
}
] |
scidocsrr
|
f992ab9730adea9ef71dff62a2a962cb
|
A data mining framework for optimal product selection in retail supermarket data: the generalized PROFSET model
|
[
{
"docid": "74ef26e332b12329d8d83f80169de5c0",
"text": "It has been claimed that the discovery of association rules is well-suited for applications of market basket analysis to reveal regularities in the purchase behaviour of customers. Moreover, recent work indicates that the discovery of interesting rules can in fact only be addressed within a microeconomic framework. This study integrates the discovery of frequent itemsets with a (microeconomic) model for product selection (PROFSET). The model enables the integration of both quantitative and qualitative (domain knowledge) criteria. Sales transaction data from a fullyautomated convenience store is used to demonstrate the effectiveness of the model against a heuristic for product selection based on product-specific profitability. We show that with the use of frequent itemsets we are able to identify the cross-sales potential of product items and use this information for better product selection. Furthermore, we demonstrate that the impact of product assortment decisions on overall assortment profitability can easily be evaluated by means of sensitivity analysis.",
"title": ""
}
] |
[
{
"docid": "4c0c4b68cdfa1cf684eabfa20ee0b88b",
"text": "Orthogonal Frequency Division Multiplexing (OFDM) is an attractive technique for wireless communication over frequency-selective fading channels. OFDM suffers from high Peak-to-Average Power Ratio (PAPR), which limits OFDM usage and reduces the efficiency of High Power Amplifier (HPA) or badly degrades BER. Many PAPR reduction techniques have been proposed in the literature. PAPR reduction techniques can be classified into blind receiver and non-blind receiver techniques. Active Constellation Extension (ACE) is one of the best blind receiver techniques. While, Partial Transmit Sequence (PTS) can work as blind / non-blind technique. PTS has a great PAPR reduction gain on the expense of increasing computational complexity. In this paper we combine PTS with ACE in four possible ways to be suitable for blind receiver applications with better performance than conventional methods (i.e. PTS and ACE). Results show that ACE-PTS scheme is the best among others. Expectedly, any hybrid technique has computational complexity larger than that of its components. However, ACE-PTS can be used to achieve the same performance as that of PTS or worthy better, with less number of subblocks (i.e. with less computational complexity) especially in low order modulation techniques (e.g. 4-QAM and 16-QAM). Results show that ACE-PTS with V=8 can perform similar to or better than PTS with V=10 in 16-QAM or 4-QAM, respectively, with 74% and 40.5% reduction in required numbers of additions and multiplications, respectively.",
"title": ""
},
{
"docid": "1e868977ef9377d0dca9ba39b6ba5898",
"text": "During last decade, tremendous efforts have been devoted to the research of time series classification. Indeed, many previous works suggested that the simple nearest-neighbor classification is effective and difficult to beat. However, we usually need to determine the distance metric (e.g., Euclidean distance and Dynamic Time Warping) for different domains, and current evidence shows that there is no distance metric that is best for all time series data. Thus, the choice of distance metric has to be done empirically, which is time expensive and not always effective. To automatically determine the distance metric, in this paper, we investigate the distance metric learning and propose a novel Convolutional Nonlinear Neighbourhood Components Analysis model for time series classification. Specifically, our model performs supervised learning to project original time series into a transformed space. When classifying, nearest neighbor classifier is then performed in this transformed space. Finally, comprehensive experimental results demonstrate that our model can improve the classification accuracy to some extent, which indicates that it can learn a good distance metric.",
"title": ""
},
{
"docid": "b7b2049ef36bd778c32f505ee3b509e6",
"text": "The larger and longer body of multi-axle vehicle makes it difficult to steer as flexibly as usual. For this reason, a novel steering mode which combines traditional Ackerman steering and Skid steering method is proposed and the resulted turning characteristics is studied in this research. First, the research methods are identified by means of building and analysing a vehicle dynamical model. Then, the influence of rear-wheels' assisted steering on vehicle yaw rate, turning radius and wheel side-slip angle is analysed by solving a linear simplified model. An executive strategy of an additional yaw moment produced by rear-wheels during the vehicle steering at a relative lower speed is put forward. And a torque distribution method of rear-wheels is given. Finally, a comparison with all-wheel steering vehicles is made. It turned out that this steering mode can effectively decrease the turning radius or increase mobility and have an advantage over all-wheel steering.",
"title": ""
},
{
"docid": "037dcb40dff3d16a13843df2f618245c",
"text": "Deep convolutional neural networks (CNNs) can be applied to malware binary detection through images classification. The performance, however, is degraded due to the imbalance of malware families (classes). To mitigate this issue, we propose a simple yet effective weighted softmax loss which can be employed as the final layer of deep CNNs. The original softmax loss is weighted, and the weight value can be determined according to class size. A scaling parameter is also included in computing the weight. Proper selection of this parameter has been studied and an empirical option is given. The weighted loss aims at alleviating the impact of data imbalance in an end-to-end learning fashion. To validate the efficacy, we deploy the proposed weighted loss in a pre-trained deep CNN model and fine-tune it to achieve promising results on malware images classification. Extensive experiments also indicate that the new loss function can fit other typical CNNs with an improved classification performance. Keywords— Deep Learning, Malware Images, Convolutional Neural Networks, CNN, Image Classification, Imbalanced Data Classification, Softmaxloss",
"title": ""
},
{
"docid": "4f3936b753abd2265d867c0937aec24c",
"text": "A weighted constraint satisfaction problem (WCSP) is a constraint satisfaction problem in which preferences among solutions can be expressed. Bucket elimination is a complete technique commonly used to solve this kind of constraint satisfaction problem. When the memory required to apply bucket elimination is too high, a heuristic method based on it (denominated mini-buckets) can be used to calculate bounds for the optimal solution. Nevertheless, the curse of dimensionality makes these techniques impractical on large scale problems. In response to this situation, we present a memetic algorithm for WCSPs in which bucket elimination is used as a mechanism for recombining solutions, providing the best possible child from the parental set. Subsequently, a multi-level model in which this exact/metaheuristic hybrid is further hybridized with branch-and-bound techniques and mini-buckets is studied. As a case study, we have applied these algorithms to the resolution of the maximum density still life problem, a hard constraint optimization problem based on Conway’s game of life. The resulting algorithm consistently finds optimal patterns for up to date solved instances in less time than current approaches. Moreover, it is shown that this proposal provides new best known solutions for very large instances.",
"title": ""
},
{
"docid": "fc726cbc5f4c0b9faa47a52ca7e73f9a",
"text": "Osteoarthritis (OA) has long been considered a \"wear and tear\" disease leading to loss of cartilage. OA used to be considered the sole consequence of any process leading to increased pressure on one particular joint or fragility of cartilage matrix. Progress in molecular biology in the 1990s has profoundly modified this paradigm. The discovery that many soluble mediators such as cytokines or prostaglandins can increase the production of matrix metalloproteinases by chondrocytes led to the first steps of an \"inflammatory\" theory. However, it took a decade before synovitis was accepted as a critical feature of OA, and some studies are now opening the way to consider the condition a driver of the OA process. Recent experimental data have shown that subchondral bone may have a substantial role in the OA process, as a mechanical damper, as well as a source of inflammatory mediators implicated in the OA pain process and in the degradation of the deep layer of cartilage. Thus, initially considered cartilage driven, OA is a much more complex disease with inflammatory mediators released by cartilage, bone and synovium. Low-grade inflammation induced by the metabolic syndrome, innate immunity and inflammaging are some of the more recent arguments in favor of the inflammatory theory of OA and highlighted in this review.",
"title": ""
},
{
"docid": "5ee940efb443ee38eafbba9e0d14bdd2",
"text": "BACKGROUND\nThe stability of biochemical analytes has already been investigated, but results strongly differ depending on parameters, methodologies, and sample storage times. We investigated the stability for many biochemical parameters after different storage times of both whole blood and plasma, in order to define acceptable pre- and postcentrifugation delays in hospital laboratories.\n\n\nMETHODS\nTwenty-four analytes were measured (Modular® Roche analyzer) in plasma obtained from blood collected into lithium heparin gel tubes, after 2-6 hr of storage at room temperature either before (n = 28: stability in whole blood) or after (n = 21: stability in plasma) centrifugation. Variations in concentrations were expressed as mean bias from baseline, using the analytical change limit (ACL%) or the reference change value (RCV%) as acceptance limit.\n\n\nRESULTS\nIn tubes stored before centrifugation, mean plasma concentrations significantly decreased after 3 hr for phosphorus (-6.1% [95% CI: -7.4 to -4.7%]; ACL 4.62%) and lactate dehydrogenase (LDH; -5.7% [95% CI: -7.4 to -4.1%]; ACL 5.17%), and slightly decreased after 6 hr for potassium (-2.9% [95% CI: -5.3 to -0.5%]; ACL 4.13%). In plasma stored after centrifugation, mean concentrations decreased after 6 hr for bicarbonates (-19.7% [95% CI: -22.9 to -16.5%]; ACL 15.4%), and moderately increased after 4 hr for LDH (+6.0% [95% CI: +4.3 to +7.6%]; ACL 5.17%). Based on RCV, all the analytes can be considered stable up to 6 hr, whether before or after centrifugation.\n\n\nCONCLUSION\nThis study proposes acceptable delays for most biochemical tests on lithium heparin gel tubes arriving at the laboratory or needing to be reanalyzed.",
"title": ""
},
{
"docid": "f5a188c87dd38a0a68612352891bcc3f",
"text": "Sentiment analysis of online documents such as news articles, blogs and microblogs has received increasing attention in recent years. In this article, we propose an efficient algorithm and three pruning strategies to automatically build a word-level emotional dictionary for social emotion detection. In the dictionary, each word is associated with the distribution on a series of human emotions. In addition, a method based on topic modeling is proposed to construct a topic-level dictionary, where each topic is correlated with social emotions. Experiment on the real-world data sets has validated the effectiveness and reliability of the methods. Compared with other lexicons, the dictionary generated using our approach is language-independent, fine-grained, and volume-unlimited. The generated dictionary has a wide range of applications, including predicting the emotional distribution of news articles, identifying social emotions on certain entities and news events.",
"title": ""
},
{
"docid": "0bd7956dbee066a5b7daf4cbd5926f35",
"text": "Computer networks lack a general control paradigm, as traditional networks do not provide any networkwide management abstractions. As a result, each new function (such as routing) must provide its own state distribution, element discovery, and failure recovery mechanisms. We believe this lack of a common control platform has significantly hindered the development of flexible, reliable and feature-rich network control planes. To address this, we present Onix, a platform on top of which a network control plane can be implemented as a distributed system. Control planes written within Onix operate on a global view of the network, and use basic state distribution primitives provided by the platform. Thus Onix provides a general API for control plane implementations, while allowing them to make their own trade-offs among consistency, durability, and scalability.",
"title": ""
},
{
"docid": "40043360644ded6950e1f46bd2caaf96",
"text": "Recently, there has been a rapidly growing interest in deep learning research and their applications to real-world problems. In this paper, we aim at evaluating and comparing LSTM deep learning architectures for short-and long-term prediction of financial time series. This problem is often considered as one of the most challenging real-world applications for time-series prediction. Unlike traditional recurrent neural networks, LSTM supports time steps of arbitrary sizes and without the vanishing gradient problem. We consider both bidirectional and stacked LSTM predictive models in our experiments and also benchmark them with shallow neural networks and simple forms of LSTM networks. The evaluations are conducted using a publicly available dataset for stock market closing prices.",
"title": ""
},
{
"docid": "a3be253034ffcf61a25ad265fda1d4ff",
"text": "With the development of automated logistics systems, flexible manufacture systems (FMS) and unmanned automated factories, the application of automated guided vehicle (AGV) gradually become more important to improve production efficiency and logistics automatism for enterprises. The development of the AGV systems play an important role in reducing labor cost, improving working conditions, unifying information flow and logistics. Path planning has been a key issue in AGV control system. In this paper, two key problems, shortest time path planning and collision in multi AGV have been solved. An improved A-Star (A*) algorithm is proposed, which introduces factors of turning, and edge removal based on the improved A* algorithm is adopted to solve k shortest path problem. Meanwhile, a dynamic path planning method based on A* algorithm which searches effectively the shortest-time path and avoids collision has been presented. Finally, simulation and experiment have been conducted to prove the feasibility of the algorithm.",
"title": ""
},
{
"docid": "8c80b8b0e00fa6163d945f7b1b8f63e5",
"text": "In this paper, we propose an architecture model called Design Rule Space (DRSpace). We model the architecture of a software system as multiple overlapping DRSpaces, reflecting the fact that any complex software system must contain multiple aspects, features, patterns, etc. We show that this model provides new ways to analyze software quality. In particular, we introduce an Architecture Root detection algorithm that captures DRSpaces containing large numbers of a project’s bug-prone files, which are called Architecture Roots (ArchRoots). After investigating ArchRoots calculated from 15 open source projects, the following observations become clear: from 35% to 91% of a project’s most bug-prone files can be captured by just 5 ArchRoots, meaning that bug-prone files are likely to be architecturally connected. Furthermore, these ArchRoots tend to live in the system for significant periods of time, serving as the major source of bug-proneness and high maintainability costs. Moreover, each ArchRoot reveals multiple architectural flaws that propagate bugs among files and this will incur high maintenance costs over time. The implication of our study is that the quality, in terms of bug-proneness, of a large, complex software project cannot be fundamentally improved without first fixing its architectural flaws.",
"title": ""
},
{
"docid": "d2928d8227544e8251818f06099b17fd",
"text": "Driven by the dominance of the relational model, the requirements of modern applications, and the veracity of data, we revisit the fundamental notion of a key in relational databases with NULLs. In SQL database systems primary key columns are NOT NULL by default. NULL columns may occur in unique constraints which only guarantee uniqueness for tuples which do not feature null markers in any of the columns involved, and therefore serve a different function than primary keys. We investigate the notions of possible and certain keys, which are keys that hold in some or all possible worlds that can originate from an SQL table, respectively. Possible keys coincide with the unique constraint of SQL, and thus provide a semantics for their syntactic definition in the SQL standard. Certain keys extend primary keys to include NULL columns, and thus form a sufficient and necessary condition to identify tuples uniquely, while primary keys are only sufficient for that purpose. In addition to basic characterization, axiomatization, and simple discovery approaches for possible and certain keys, we investigate the existence and construction of Armstrong tables, and describe an indexing scheme for enforcing certain keys. Our experiments show that certain keys with NULLs do occur in real-world databases, and that related computational problems can be solved efficiently. Certain keys are therefore semantically well-founded and able to maintain data quality in the form of Codd’s entity integrity rule while handling the requirements of modern applications, that is, higher volumes of incomplete data from different formats.",
"title": ""
},
{
"docid": "c2d17d5a5db10efafa4e56a2b6cd7afa",
"text": "The main purpose of analyzing the social network data is to observe the behaviors and trends that are followed by people. How people interact with each other, what they usually share, what are their interests on social networks, so that analysts can focus new trends for the provision of those things which are of great interest for people so in this paper an easy approach of gathering and analyzing data through keyword based search in social networks is examined using NodeXL and data is gathered from twitter in which political trends have been analyzed. As a result it will be analyzed that, what people are focusing most in politics.",
"title": ""
},
{
"docid": "6f6706ee6f54d71a172c43403cdb6135",
"text": "Stator dc-excited vernier reluctance machines (dc-VRMs) are a kind of a novel vernier reluctance synchronous machine that employs doubly salient structures; their innovations include stator concentrated dc windings to generate the exciting field. Compared with the rotor wound field machines or stator/rotor PM synchronous machines, these machines are characterized by low cost due to the absence of PMs, a robust rotor structure, and a wide speed range resulting from the flexible stator dc exciting field. In this paper, with the proposed phasor diagram, the power factor of dc-VRMs is analyzed analytically and with the finite-element analysis, and the analysis results are confirmed with the experiment. It is found that, with constant slot sizes and slot fill, the power factor is mainly dependent on the ratio of the dc current to the armature winding current and also the ratio of the armature synchronous inductance to the mutual inductance between the field winding and the armature winding. However, torque will be sacrificed if measures are taken to further improve the power factor.",
"title": ""
},
{
"docid": "051d402ce90d7d326cc567e228c8411f",
"text": "CDM ESD event has become the main ESD reliability concern for integrated-circuits products using nanoscale CMOS technology. A novel CDM ESD protection design, using self-biased current trigger (SBCT) and source pumping, has been proposed and successfully verified in 0.13-lm CMOS technology to achieve 1-kV CDM ESD robustness. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ea29b3421c36178680ae63c16b9cecad",
"text": "Traffic engineering under OSPF routes along the shortest paths, which may cause network congestion. Software Defined Networking (SDN) is an emerging network architecture which exerts a separation between the control plane and the data plane. The SDN controller can centrally control the network state through modifying the flow tables maintained by routers. Network operators can flexibly split arbitrary flows to outgoing links through the deployment of the SDN. However, SDN has its own challenges of full deployment, which makes the full deployment of SDN difficult in the short term. In this paper, we explore the traffic engineering in a SDN/OSPF hybrid network. In our scenario, the OSPF weights and flow splitting ratio of the SDN nodes can both be changed. The controller can arbitrarily split the flows coming into the SDN nodes. The regular nodes still run OSPF. Our contribution is that we propose a novel algorithm called SOTE that can obtain a lower maximum link utilization. We reap a greater benefit compared with the results of the OSPF network and the SDN/OSPF hybrid network with fixed weight setting. We also find that when only 30% of the SDN nodes are deployed, we can obtain a near optimal performance.",
"title": ""
},
{
"docid": "99ea986731bd262e1b6380d1baac62c4",
"text": "A patient of 58 years of age without medical problems came to the clinic due to missing teeth in the upper posterior region and to change the partial fixed prosthesis in the upper anterior area. Proposed treatment: surgical phase of three conical shape tapering implants with prosthetic platform in occlusal direction with mechanize collar tissue level with fixtures to place implant-supported metal-ceramic restorations. In the anterior area, a zirconium oxide fixed partial prosthesis was vertical preparation of the tooth's. When preparing teeth to receive fixed prostheses, the definition and shape of finish lines has been a subject of endless discussion, modification, and change ever since the beginnings of restorative prosthetic dentistry. The BOPT technique (biologically oriented preparation technique) was first described in the context of tooth-supported restorations but has recently been applied to dental implants with the aim of ensuring healthy peri-implant tissue and creating the possibility of modeling the peri-implant sulcus by modifying prosthetic emergence profiles. Vertical preparation of teeth and abutments without finish line on implants is a technique which was found to be adequate for ensuring the remodeling and stability of peri-implant tissues. Key words:Peri-implant tissue health, shoulderless abutments.",
"title": ""
},
{
"docid": "cdbdd1a6cd129b42065183a6f7fc5bc9",
"text": "Many methods designed to create defenses against distributed denial of service (DDoS) attacks are focused on the IP and TCP layers instead of the high layer. They are not suitable for handling the new type of attack which is based on the application layer. In this paper, we introduce a new scheme to achieve early attack detection and filtering for the application-layer-based DDoS attack. An extended hidden semi-Markov model is proposed to describe the browsing behaviors of web surfers. In order to reduce the computational amount introduced by the model's large state space, a novel forward algorithm is derived for the online implementation of the model based on the M-algorithm. Entropy of the user's HTTP request sequence fitting to the model is used as a criterion to measure the user's normality. Finally, experiments are conducted to validate our model and algorithm.",
"title": ""
},
{
"docid": "c83d034e052926520677d0c5880f8800",
"text": "Sperm vitality is a reflection of the proportion of live, membrane-intact spermatozoa determined by either dye exclusion or osmoregulatory capacity under hypo-osmotic conditions. In this chapter we address the two most common methods of sperm vitality assessment: eosin-nigrosin staining and the hypo-osmotic swelling test, both utilized in clinical Andrology laboratories.",
"title": ""
}
] |
scidocsrr
|
0ab5807f327a31b0e377e1510445b1fd
|
Processing performance on Apache Pig, Apache Hive and MySQL cluster
|
[
{
"docid": "3cab403ffab3e44252174ab5d7d985f8",
"text": "A prominent parallel data processing tool MapReduce is gaining significant momentum from both industry and academia as the volume of data to analyze grows rapidly. While MapReduce is used in many areas where massive data analysis is required, there are still debates on its performance, efficiency per node, and simple abstraction. This survey intends to assist the database and open source communities in understanding various technical aspects of the MapReduce framework. In this survey, we characterize the MapReduce framework and discuss its inherent pros and cons. We then introduce its optimization strategies reported in the recent literature. We also discuss the open issues and challenges raised on parallel data analysis with MapReduce.",
"title": ""
},
{
"docid": "cd35602ecb9546eb0f9a0da5f6ae2fdf",
"text": "The size of data sets being collected and analyzed in the industry for business intelligence is growing rapidly, making traditional warehousing solutions prohibitively expensive. Hadoop [3] is a popular open-source map-reduce implementation which is being used as an alternative to store and process extremely large data sets on commodity hardware. However, the map-reduce programming model is very low level and requires developers to write custom programs which are hard to maintain and reuse. In this paper, we present Hive, an open-source data warehousing solution built on top of Hadoop. Hive supports queries expressed in a SQL-like declarative language HiveQL, which are compiled into map-reduce jobs executed on Hadoop. In addition, HiveQL supports custom map-reduce scripts to be plugged into queries. The language includes a type system with support for tables containing primitive types, collections like arrays and maps, and nested compositions of the same. The underlying IO libraries can be extended to query data in custom formats. Hive also includes a system catalog, Hive-Metastore, containing schemas and statistics, which is useful in data exploration and query optimization. In Facebook, the Hive warehouse contains several thousand tables with over 700 terabytes of data and is being used extensively for both reporting and ad-hoc analyses by more than 100 users. The rest of the paper is organized as follows. Section 2 describes the Hive data model and the HiveQL language with an example. Section 3 describes the Hive system architecture and an overview of the query life cycle. Section 4 provides a walk-through of the demonstration. We conclude with future work in Section 5.",
"title": ""
},
{
"docid": "25adc988a57d82ae6de7307d1de5bf71",
"text": "The size of data sets being collected and analyzed in the industry for business intelligence is growing rapidly, making traditional warehousing solutions prohibitively expensive. Hadoop [1] is a popular open-source map-reduce implementation which is being used in companies like Yahoo, Facebook etc. to store and process extremely large data sets on commodity hardware. However, the map-reduce programming model is very low level and requires developers to write custom programs which are hard to maintain and reuse. In this paper, we present Hive, an open-source data warehousing solution built on top of Hadoop. Hive supports queries expressed in a SQL-like declarative language - HiveQL, which are compiled into map-reduce jobs that are executed using Hadoop. In addition, HiveQL enables users to plug in custom map-reduce scripts into queries. The language includes a type system with support for tables containing primitive types, collections like arrays and maps, and nested compositions of the same. The underlying IO libraries can be extended to query data in custom formats. Hive also includes a system catalog - Metastore - that contains schemas and statistics, which are useful in data exploration, query optimization and query compilation. In Facebook, the Hive warehouse contains tens of thousands of tables and stores over 700TB of data and is being used extensively for both reporting and ad-hoc analyses by more than 200 users per month.",
"title": ""
}
] |
[
{
"docid": "8086d70f97bd300002bb4ef7e60e8f9c",
"text": "In this paper, we present and investigate a model for solid tumor growth that incorporates features of the tumor microenvironment. Using analysis and nonlinear numerical simulations, we explore the effects of the interaction between the genetic characteristics of the tumor and the tumor microenvironment on the resulting tumor progression and morphology. We find that the range of morphological responses can be placed in three categories that depend primarily upon the tumor microenvironment: tissue invasion via fragmentation due to a hypoxic microenvironment; fingering, invasive growth into nutrient rich, biomechanically unresponsive tissue; and compact growth into nutrient rich, biomechanically responsive tissue. We found that the qualitative behavior of the tumor morphologies was similar across a broad range of parameters that govern the tumor genetic characteristics. Our findings demonstrate the importance of the impact of microenvironment on tumor growth and morphology and have important implications for cancer therapy. In particular, if a treatment impairs nutrient transport in the external tissue (e.g., by anti-angiogenic therapy) increased tumor fragmentation may result, and therapy-induced changes to the biomechanical properties of the tumor or the microenvironment (e.g., anti-invasion therapy) may push the tumor in or out of the invasive fingering regime.",
"title": ""
},
{
"docid": "c47f7e2128c89173d8a75271d0a488ff",
"text": "Dependence on computers to store and process sensitive information has made it necessary to secure them from intruders. A behavioral biometric such as keystroke dynamics which makes use of the typing cadence of an individual can be used to strengthen existing security techniques effectively and cheaply. Due to the ballistic (semi-autonomous) nature of the typing behavior it is difficult to impersonate, making it useful as a biometric. Therefore in this paper, we provide a basic background of the psychological basis behind the use of keystroke dynamics. We also discuss the data acquisition methods, approaches and the performance of the methods used by researchers on standard computer keyboards. In this survey, we find that the use and acceptance of this biometric could be increased by development of standardized databases, assignment of nomenclature for features, development of common data interchange formats, establishment of protocols for evaluating methods, and resolution of privacy issues.",
"title": ""
},
{
"docid": "de8f5656f17151c43e2454aa7b8f929f",
"text": "No wonder you activities are, reading will be always needed. It is not only to fulfil the duties that you need to finish in deadline time. Reading will encourage your mind and thoughts. Of course, reading will greatly develop your experiences about everything. Reading concrete mathematics a foundation for computer science is also a way as one of the collective books that gives many advantages. The advantages are not only for you, but for the other peoples with those meaningful benefits.",
"title": ""
},
{
"docid": "d071c70b85b10a62538d73c7272f5d99",
"text": "The Amaryllidaceae alkaloids represent a large (over 300 alkaloids have been isolated) and still expanding group of biogenetically related isoquinoline alkaloids that are found exclusively in plants belonging to this family. In spite of their great variety of pharmacological and/or biological properties, only galanthamine is used therapeutically. First isolated from Galanthus species, this alkaloid is a long-acting, selective, reversible and competitive inhibitor of acetylcholinesterase, and is used for the treatment of Alzheimer’s disease. Other Amaryllidaceae alkaloids of pharmacological interest will also be described in this chapter.",
"title": ""
},
{
"docid": "2ebb21cb1c6982d2d3839e2616cac839",
"text": "In order to reduce micromouse dashing time in complex maze, and improve micromouse’s stability in high speed dashing, diagonal dashing method was proposed. Considering the actual dashing trajectory of micromouse in diagonal path, the path was decomposed into three different trajectories; Fully consider turning in and turning out of micromouse dashing action in diagonal, leading and passing of the every turning was used to realize micromouse posture adjustment, with the help of accelerometer sensor ADXL202, rotation angle error compensation was done and the micromouse realized its precise position correction; For the diagonal dashing, front sensor S1,S6 and accelerometer sensor ADXL202 were used to ensure micromouse dashing posture. Principle of new diagonal dashing method is verified by micromouse based on STM32F103. Experiments of micromouse dashing show that diagonal dashing method can greatly improve its stability, and also can reduce its dashing time in complex maze.",
"title": ""
},
{
"docid": "ac5319160d1444ab688a90f9ccf03c45",
"text": "In this paper we present a novel vision-based markerless hand pose estimation scheme with the input of depth image sequences. The proposed scheme exploits both temporal constraints and spatial features of the input sequence, and focuses on hand parsing and 3D fingertip localization for hand pose estimation. The hand parsing algorithm incorporates a novel spatial-temporal feature into a Bayesian inference framework to assign the correct label to each image pixel. The 3D fingertip localization algorithm adapts a recently developed geodesic extrema extraction method to fingertip detection with the hand parsing algorithm, a novel path-reweighting method and K-means clustering in metric space. The detected 3D fingertip locations are finally used for hand pose estimation with an inverse kinematics solver. Quantitative experiments on synthetic data show the proposed hand pose estimation scheme can accurately capture the natural hand motion. A simulated water-oscillator application is also built to demonstrate the effectiveness of the proposed method in human-computer interaction scenarios.",
"title": ""
},
{
"docid": "274f9e9f20a7ba3b29a5ab939aea68a2",
"text": "Clustering validation is a long standing challenge in the clustering literature. While many validation measures have been developed for evaluating the performance of clustering algorithms, these measures often provide inconsistent information about the clustering performance and the best suitable measures to use in practice remain unknown. This paper thus fills this crucial void by giving an organized study of 16 external validation measures for K-means clustering. Specifically, we first introduce the importance of measure normalization in the evaluation of the clustering performance on data with imbalanced class distributions. We also provide normalization solutions for several measures. In addition, we summarize the major properties of these external measures. These properties can serve as the guidance for the selection of validation measures in different application scenarios. Finally, we reveal the interrelationships among these external measures. By mathematical transformation, we show that some validation measures are equivalent. Also, some measures have consistent validation performances. Most importantly, we provide a guide line to select the most suitable validation measures for K-means clustering.",
"title": ""
},
{
"docid": "cf2c8ab1b22ae1a33e9235a35f942e7e",
"text": "Adversarial attacks against neural networks are a problem of considerable importance, for which effective defenses are not yet readily available. We make progress toward this problem by showing that non-negative weight constraints can be used to improve resistance in specific scenarios. In particular, we show that they can provide an effective defense for binary classification problems with asymmetric cost, such as malware or spam detection. We also show the potential for non-negativity to be helpful to non-binary problems by applying it to image",
"title": ""
},
{
"docid": "5bac6135af1c6014352d6ce5e91ec8d3",
"text": "Acute necrotizing fasciitis (NF) in children is a dangerous illness characterized by progressive necrosis of the skin and subcutaneous tissue. The present study summarizes our recent experience with the treatment of pediatric patients with severe NF. Between 2000 and 2009, eight children suffering from NF were admitted to our department. Four of the children received an active treatment strategy including continuous renal replacement therapy (CRRT), radical debridement, and broad-spectrum antibiotics. Another four children presented at a late stage of illness, and did not complete treatment. Clinical data for these two patient groups were retrospectively analyzed. The four patients that completed CRRT, radical debridement, and a course of broad-spectrum antibiotics were cured without any significant residual morbidity. The other four infants died shortly after admission. Early diagnosis, timely debridement, and aggressive use of broad-spectrum antibiotics are key factors for achieving a satisfactory outcome for cases of acute NF. Early intervention with CRRT to prevent septic shock may also improve patient outcome.",
"title": ""
},
{
"docid": "785b1e2b8cf185c0ffa044d62309c711",
"text": "Phenomenally successful in practical inference problems, convolutional neural networks (CNN) are widely deployed in mobile devices, data centers, and even supercomputers. The number of parameters needed in CNNs, however, are often large and undesirable. Consequently, various methods have been developed to prune a CNN once it is trained. Nevertheless, the resulting CNNs offer limited benefits. While pruning the fully connected layers reduces a CNN’s size considerably, it does not improve inference speed noticeably as the compute heavy parts lie in convolutions. Pruning CNNs in a way that increase inference speed often imposes specific sparsity structures, thus limiting the achievable sparsity levels. We present a method to realize simultaneously size economy and speed improvement while pruning CNNs. Paramount to our success is an efficient general sparse-with-dense matrix multiplication implementation that is applicable to convolution of feature maps with kernels of arbitrary sparsity patterns. Complementing this, we developed a performance model that predicts sweet spots of sparsity levels for different layers and on different computer architectures. Together, these two allow us to demonstrate 3.1–7.3× convolution speedups over dense convolution in AlexNet, on Intel Atom, Xeon, and Xeon Phi processors, spanning the spectrum from mobile devices to supercomputers. We also open source our project at https://github.com/IntelLabs/SkimCaffe.",
"title": ""
},
{
"docid": "5ee940efb443ee38eafbba9e0d14bdd2",
"text": "BACKGROUND\nThe stability of biochemical analytes has already been investigated, but results strongly differ depending on parameters, methodologies, and sample storage times. We investigated the stability for many biochemical parameters after different storage times of both whole blood and plasma, in order to define acceptable pre- and postcentrifugation delays in hospital laboratories.\n\n\nMETHODS\nTwenty-four analytes were measured (Modular® Roche analyzer) in plasma obtained from blood collected into lithium heparin gel tubes, after 2-6 hr of storage at room temperature either before (n = 28: stability in whole blood) or after (n = 21: stability in plasma) centrifugation. Variations in concentrations were expressed as mean bias from baseline, using the analytical change limit (ACL%) or the reference change value (RCV%) as acceptance limit.\n\n\nRESULTS\nIn tubes stored before centrifugation, mean plasma concentrations significantly decreased after 3 hr for phosphorus (-6.1% [95% CI: -7.4 to -4.7%]; ACL 4.62%) and lactate dehydrogenase (LDH; -5.7% [95% CI: -7.4 to -4.1%]; ACL 5.17%), and slightly decreased after 6 hr for potassium (-2.9% [95% CI: -5.3 to -0.5%]; ACL 4.13%). In plasma stored after centrifugation, mean concentrations decreased after 6 hr for bicarbonates (-19.7% [95% CI: -22.9 to -16.5%]; ACL 15.4%), and moderately increased after 4 hr for LDH (+6.0% [95% CI: +4.3 to +7.6%]; ACL 5.17%). Based on RCV, all the analytes can be considered stable up to 6 hr, whether before or after centrifugation.\n\n\nCONCLUSION\nThis study proposes acceptable delays for most biochemical tests on lithium heparin gel tubes arriving at the laboratory or needing to be reanalyzed.",
"title": ""
},
{
"docid": "d7c8170b0926cf12ca8dfee1b87ba898",
"text": "The representation of a knowledge graph (KG) in a latent space recently has attracted more and more attention. To this end, some proposed models (e.g., TransE) embed entities and relations of a KG into a \"point\" vector space by optimizing a global loss function which ensures the scores of positive triplets are higher than negative ones. We notice that these models always regard all entities and relations in a same manner and ignore their (un)certainties. In fact, different entities and relations may contain different certainties, which makes identical certainty insufficient for modeling. Therefore, this paper switches to density-based embedding and propose KG2E for explicitly modeling the certainty of entities and relations, which learn the representations of KGs in the space of multi-dimensional Gaussian distributions. Each entity/relation is represented by a Gaussian distribution, where the mean denotes its position and the covariance (currently with diagonal covariance) can properly represent its certainty. In addition, compared with the symmetric measures used in point-based methods, we employ the KL-divergence for scoring triplets, which is a natural asymmetry function for effectively modeling multiple types of relations. We have conducted extensive experiments on link prediction and triplet classification with multiple benchmark datasets (WordNet and Freebase). Our experimental results demonstrate that our method can effectively model the (un)certainties of entities and relations in a KG, and it significantly outperforms state-of-the-art methods (including TransH and TransR).",
"title": ""
},
{
"docid": "e34815efa68cb1b7a269e436c838253d",
"text": "A new mobile robot prototype for inspection of overhead transmission lines is proposed. The mobile platform is composed of 3 arms. And there is a motorized rubber wheel on the end of each arm. On the two end arms, a gripper is designed to clamp firmly onto the conductors from below to secure the robot. Each arm has a motor to achieve 2 degrees of freedom which is realized by moving along a curve. It could roll over some obstacles (compression splices, vibration dampers, etc). And the robot could clear other types of obstacles (spacers, suspension clamps, etc).",
"title": ""
},
{
"docid": "1ec395dbe807ff883dab413419ceef56",
"text": "\"The Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure\" provides a new guideline for hypertension prevention and management. The following are the key messages(1) In persons older than 50 years, systolic blood pressure (BP) of more than 140 mm Hg is a much more important cardiovascular disease (CVD) risk factor than diastolic BP; (2) The risk of CVD, beginning at 115/75 mm Hg, doubles with each increment of 20/10 mm Hg; individuals who are normotensive at 55 years of age have a 90% lifetime risk for developing hypertension; (3) Individuals with a systolic BP of 120 to 139 mm Hg or a diastolic BP of 80 to 89 mm Hg should be considered as prehypertensive and require health-promoting lifestyle modifications to prevent CVD; (4) Thiazide-type diuretics should be used in drug treatment for most patients with uncomplicated hypertension, either alone or combined with drugs from other classes. Certain high-risk conditions are compelling indications for the initial use of other antihypertensive drug classes (angiotensin-converting enzyme inhibitors, angiotensin-receptor blockers, beta-blockers, calcium channel blockers); (5) Most patients with hypertension will require 2 or more antihypertensive medications to achieve goal BP (<140/90 mm Hg, or <130/80 mm Hg for patients with diabetes or chronic kidney disease); (6) If BP is more than 20/10 mm Hg above goal BP, consideration should be given to initiating therapy with 2 agents, 1 of which usually should be a thiazide-type diuretic; and (7) The most effective therapy prescribed by the most careful clinician will control hypertension only if patients are motivated. Motivation improves when patients have positive experiences with and trust in the clinician. Empathy builds trust and is a potent motivator. Finally, in presenting these guidelines, the committee recognizes that the responsible physician's judgment remains paramount.",
"title": ""
},
{
"docid": "e8bf03ec53323bb8271a42c2d4602f62",
"text": "UNLABELLED\nCommunity intervention programmes to reduce cardiovascular disease (CVD) risk factors within urban communities in developing countries are rare. One possible explanation is the difficulty of designing an intervention that corresponds to the local context and culture.\n\n\nOBJECTIVES\nTo understand people's perceptions of health and CVD, and how people prevent CVD in an urban setting in Yogyakarta, Indonesia.\n\n\nMETHODS\nA qualitative study was performed through focus group discussions and individual research interviews. Participants were selected purposively in terms of socio-economic status (SES), lay people, community leaders and government officers. Data were analysed by using content analysis.\n\n\nRESULTS\nSEVEN CATEGORIES WERE IDENTIFIED: (1) heart disease is dangerous, (2) the cause of heart disease, (3) men have no time for health, (4) women are caretakers for health, (5) different information-seeking patterns, (6) the role of community leaders and (7) patterns of lay people's action. Each category consists of sub-categories according to the SES of participants. The main theme that emerged was one of balance and harmony, indicating the necessity of assuring a balance between 'good' and 'bad' habits.\n\n\nCONCLUSIONS\nThe basic concepts of balance and harmony, which differ between low and high SES groups, must be understood when tailoring community interventions to reduce CVD risk factors.",
"title": ""
},
{
"docid": "23d2349831a364e6b77e3c263a8321c8",
"text": "lmost a decade has passed since we started advocating a process of usability design [20-22]. This article is a status report about the value of this process and, mainly, a description of new ideas for enhancing the use of the process. We first note that, when followed , the process leads to usable, useful, likeable computer systems and applications. Nevertheless, experience and observational evidence show that (because of the way development work is organized and carried out) the process is often not followed, despite designers' enthusiasm and motivation to do so. To get around these organizational and technical obstacles, we propose a) greater reliance on existing methodologies for establishing test-able usability and productivity-enhancing goals; b) a new method for identifying and focuging attention on long-term, trends about the effects that computer applications have on end-user productivity; and c) a new approach, now under way, to application development, particularly the development of user interfaces. The process consists of four activities [18, 20-22]. Early Focus On Users. Designers should have direct contact with intended or actual users-via interviews , observations, surveys, partic-ipatory design. The aim is to understand users' cognitive, behav-ioral, attitudinal, and anthropomet-ric characteristics-and the characteristics of the jobs they will be doing. Integrated Design. All aspects of usability (e.g., user interface, help system, training plan, documentation) should evolve in parallel, rather than be defined sequentially, and should be under one management. Early~And Continual~User Testing. The only presently feasible approach to successful design is an empirical one, requiring observation and measurement of user behavior , careful evaluation of feedback , insightful solutions to existing problems, and strong motivation to make design changes. Iterative Design. A system under development must be modified based upon the results of behav-ioral tests of functions, user interface , help system, documentation, training approach. This process of implementation, testing, feedback, evaluation, and change must be repeated to iteratively improve the system. We, and others proposing similar ideas (see below), have worked hard at spreading this process of usabil-ity design. We have used numerous channels to accomplish this: frequent talks, workshops, seminars, publications, consulting, addressing arguments used against it [22], conducting a direct case study of the process [20], and identifying methods for people not fully trained as human factors professionals to use in carrying out this process [18]. The Process Works. Several lines of evidence indicate that this usabil-ity design process leads to systems, applications, and products …",
"title": ""
},
{
"docid": "4b2d4ac1be5eeec4a7e370dfa768a5af",
"text": "A new technology evaluation of fingerprint verification algorithms has been organized following the approach of the previous FVC2000 and FVC2002 evaluations, with the aim of tracking the quickly evolving state-ofthe-art of fingerprint recognition systems. Three sensors have been used for data collection, including a solid state sweeping sensor, and two optical sensors of different characteristics. The competition included a new category dedicated to “ light” systems, characterized by limited computational and storage resources. This paper summarizes the main activities of the FVC2004 organization and provides a first overview of the evaluation. Results will be further elaborated and officially presented at the International Conference on Biometric Authentication (Hong Kong) on July 2004.",
"title": ""
},
{
"docid": "4419d61684dff89f4678afe3b8dc06e0",
"text": "Reason and emotion have long been considered opposing forces. However, recent psychological and neuroscientific research has revealed that emotion and cognition are closely intertwined. Cognitive processing is needed to elicit emotional responses. At the same time, emotional responses modulate and guide cognition to enable adaptive responses to the environment. Emotion determines how we perceive our world, organise our memory, and make important decisions. In this review, we provide an overview of current theorising and research in the Affective Sciences. We describe how psychological theories of emotion conceptualise the interactions of cognitive and emotional processes. We then review recent research investigating how emotion impacts our perception, attention, memory, and decision-making. Drawing on studies with both healthy participants and clinical populations, we illustrate the mechanisms and neural substrates underlying the interactions of cognition and emotion.",
"title": ""
},
{
"docid": "9e0ebe084cb9ed489c76dac9741ea08e",
"text": "THIS PAPER OFFERS ten common sense principles that will help project managers define goals, establish checkpoints, schedules, and resource requirements, motivate and empower team members, facilitate communication, and manage conflict.",
"title": ""
},
{
"docid": "5fde7006ec6f7cf4f945b234157e5791",
"text": "In this work, we investigate the value of uncertainty modelling in 3D super-resolution with convolutional neural networks (CNNs). Deep learning has shown success in a plethora of medical image transformation problems, such as super-resolution (SR) and image synthesis. However, the highly ill-posed nature of such problems results in inevitable ambiguity in the learning of networks. We propose to account for intrinsic uncertainty through a per-patch heteroscedastic noise model and for parameter uncertainty through approximate Bayesian inference in the form of variational dropout. We show that the combined benefits of both lead to the state-of-the-art performance SR of diffusion MR brain images in terms of errors compared to ground truth. We further show that the reduced error scores produce tangible benefits in downstream tractography. In addition, the probabilistic nature of the methods naturally confers a mechanism to quantify uncertainty over the super-resolved output. We demonstrate through experiments on both healthy and pathological brains the potential utility of such an uncertainty measure in the risk assessment of the super-resolved images for subsequent clinical use.",
"title": ""
}
] |
scidocsrr
|
faa6a8931557a84ec26cdb8d2eb61b1e
|
Knowledge Sharing Behaviour in the Public Sector : the Business Process Management Perspectives
|
[
{
"docid": "916051a69190e66239f7eeed3c745578",
"text": "This paper contributes to our understanding of an increasingly important practical problem, namely the effectiveness of knowledge management in organizations. As with many other managerial innovations, knowledge management appears to have been adopted firstly by manufacturing firms, and is only now beginning to permeate the service sector, predominantly in professional services such as consulting (Hansen et al., 1999; Sarvary, 1999). Public services, traditionally slower to embrace innovative management practices, are only beginning to recognize the importance of knowledge management. There is, as yet, little published research of its implementation in this context (Bate & Robert, 2002). ABSTRACT",
"title": ""
}
] |
[
{
"docid": "38fbb369861df73a91bd816e0c22cb2a",
"text": "This paper describes a hybrid stock trading system based on Genetic Network Programming (GNP) and Mean Conditional Value-at-Risk Model (GNP–CVaR). The proposed method, combining the advantages of evolutionary algorithms and statistical model, has provided useful tools to construct portfolios and generate effective stock trading strategies for investors with different risk-attitudes. Simulation results on five stock indices show that model based on GNP and maximum Sharpe Ratio portfolio performs the best in bull market, and that based on GNP and the global minimum risk portfolio performs the best in bear market. The portfolios constructed by Markowitz’s mean–variance model performs the same as mean-CVaR model. It is clarified that the proposed system significantly improves the function and efficiency of original GNP, which can help investors make profitable decisions. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "dec0ff25de96faef92f9221085aba523",
"text": "Atopic dermatitis (AD) is characterized by allergic skin inflammation. A hallmark of AD is dry itchy skin due, at least in part, to defects in skin genes that are important for maintaining barrier function. The pathogenesis of AD remains incompletely understood. Since the description of the Nc/Nga mouse as a spontaneously occurring model of AD, a number of other mouse models of AD have been developed. They can be categorized into three groups: (1) models induced by epicutaneous application of sensitizers; (2) transgenic mice that either overexpress or lack selective molecules; (3) mice that spontaneously develop AD-like skin lesions. These models have resulted in a better understanding of the pathogenesis of AD. This review discusses these models and emphasizes the role of mechanical skin injury and skin barrier dysfunction in eliciting allergic skin inflammation.",
"title": ""
},
{
"docid": "3baf8d673b5ecf130cf770019aaa3e3c",
"text": "Fuzzy logic may be considered as an assortment of decision making techniques. In many applications like process control, the algorithm’s outcome is ruled by a number of key decisions which are made in the algorithm. Defining the best decision requires extensive knowledge of the system. When experience or understanding of the problem is not available, optimising the algorithm becomes very difficult. This is the reason why fuzzy logic is useful.",
"title": ""
},
{
"docid": "443652d4a9d96eedd832c5dbb3b41f0a",
"text": "This paper presents a rigorous analytical model for analyzing the effects of local oscillator output imperfections such as phase/amplitude imbalances and phase noise on M -ary quadrature amplitude modulation (M-QAM) transceiver performance. A closed-form expression of the error vector magnitude (EVM) and an analytic expression of the symbol error rate (SER) are derived considering a single-carrier linear transceiver link with additive white Gaussian noise channel. The proposed analytical model achieves a good agreement with the simulation results based on the Monte Carlo method. The proposed QAM imperfection analysis model provides an efficient means for system and circuit designers to analyze the wireless transceiver performance and specify the transceiver block specifications.",
"title": ""
},
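
A numerical EVM measurement of the kind used to cross-check such closed-form expressions can be sketched as below; the receiver gain/phase-imbalance model used here is a common textbook form chosen for illustration, not necessarily the one analysed in the paper.

```python
import numpy as np

def evm_rms(ideal, received):
    """RMS error vector magnitude, normalised to mean ideal symbol power."""
    err = received - ideal
    return np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(ideal) ** 2))

# 16-QAM constellation and a toy IQ gain/phase imbalance at the receiver.
levels = np.array([-3, -1, 1, 3], dtype=float)
ideal = np.array([i + 1j * q for i in levels for q in levels])
g, phi = 1.05, np.deg2rad(3)          # illustrative amplitude/phase imbalance
rx = ideal.real + 1j * g * (ideal.imag * np.cos(phi) + ideal.real * np.sin(phi))
print("EVM: %.2f %%" % (100 * evm_rms(ideal, rx)))
```
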
{
"docid": "893683af36eea6e8ab03e3dcd1429ad4",
"text": "Obtaining a good baseline between different video frames is one of the key elements in vision-based monocular SLAM systems. However, if the video frames contain only a few 2D feature correspondences with a good baseline, or the camera only rotates without sufficient translation in the beginning, tracking and mapping becomes unstable. We introduce a real-time visual SLAM system that incrementally tracks individual 2D features, and estimates camera pose by using matched 2D features, regardless of the length of the baseline. Triangulating 2D features into 3D points is deferred until key frames with sufficient baseline for the features are available. Our method can also deal with pure rotational motions, and fuse the two types of measurements in a bundle adjustment step. Adaptive criteria for key frame selection are also introduced for efficient optimization and dealing with multiple maps. We demonstrate that our SLAM system improves camera pose estimates and robustness, even with purely rotational motions.",
"title": ""
},
{
"docid": "9b2291ef3e605d85b6d0dba326aa10ef",
"text": "We propose a multi-objective method for avoiding premature convergence in evolutionary algorithms, and demonstrate a three-fold performance improvement over comparable methods. Previous research has shown that partitioning an evolving population into age groups can greatly improve the ability to identify global optima and avoid converging to local optima. Here, we propose that treating age as an explicit optimization criterion can increase performance even further, with fewer algorithm implementation parameters. The proposed method evolves a population on the two-dimensional Pareto front comprising (a) how long the genotype has been in the population (age); and (b) its performance (fitness). We compare this approach with previous approaches on the Symbolic Regression problem, sweeping the problem difficulty over a range of solution complexities and number of variables. Our results indicate that the multi-objective approach identifies the exact target solution more often that the age-layered population and standard population methods. The multi-objective method also performs better on higher complexity problems and higher dimensional datasets -- finding global optima with less computational effort.",
"title": ""
},
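
The core of the multi-objective scheme above is keeping individuals that are non-dominated on the (age, fitness) front. A minimal sketch of that selection filter follows, assuming lower age and higher fitness are both preferred.

```python
def pareto_front(population):
    """Return individuals not dominated on (age, fitness).

    Each individual is a dict with 'age' (lower is better) and
    'fitness' (higher is better).
    """
    front = []
    for p in population:
        dominated = any(
            q['age'] <= p['age'] and q['fitness'] >= p['fitness']
            and (q['age'] < p['age'] or q['fitness'] > p['fitness'])
            for q in population
        )
        if not dominated:
            front.append(p)
    return front

pop = [{'age': 1, 'fitness': 0.4}, {'age': 5, 'fitness': 0.9},
       {'age': 5, 'fitness': 0.2}, {'age': 2, 'fitness': 0.6}]
print(pareto_front(pop))   # the (age 5, fitness 0.2) individual is dropped
```
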
{
"docid": "c487af41ead3ee0bc8fe6c95b356a80b",
"text": "With such a large volume of material accessible from the World Wide Web, there is an urgent need to increase our knowledge of factors in#uencing reading from screen. We investigate the e!ects of two reading speeds (normal and fast) and di!erent line lengths on comprehension, reading rate and scrolling patterns. Scrolling patterns are de\"ned as the way in which readers proceed through the text, pausing and scrolling. Comprehension and reading rate are also examined in relation to scrolling patterns to attempt to identify some characteristics of e!ective readers. We found a reduction in overall comprehension when reading fast, but the type of information recalled was not dependent on speed. A medium line length (55 characters per line) appears to support e!ective reading at normal and fast speeds. This produced the highest level of comprehension and was also read faster than short lines. Scrolling patterns associated with better comprehension (more time in pauses and more individual scrolling movements) contrast with scrolling patterns used by faster readers (less time in pauses between scrolling). Consequently, e!ective readers can only be de\"ned in relation to the aims of the reading task, which may favour either speed or accuracy. ( 2001 Academic Press",
"title": ""
},
{
"docid": "8930fa7afc57acd9a6e664ad1801e81a",
"text": "How to construct models for speech/nonspeech discrimination is a crucial point for voice activity detectors (VADs). Semi-supervised learning is the most popular way for model construction in conventional VADs. In this correspondence, we propose an unsupervised learning framework to construct statistical models for VAD. This framework is realized by a sequential Gaussian mixture model. It comprises an initialization process and an updating process. At each subband, the GMM is firstly initialized using EM algorithm, and then sequentially updated frame by frame. From the GMM, a self-regulatory threshold for discrimination is derived at each subband. Some constraints are introduced to this GMM for the sake of reliability. For the reason of unsupervised learning, the proposed VAD does not rely on an assumption that the first several frames of an utterance are nonspeech, which is widely used in most VADs. Moreover, the speech presence probability in the time-frequency domain is a byproduct of this VAD. We tested it on speech from TIMIT database and noise from NOISEX-92 database. The evaluations effectively showed its promising performance in comparison with VADs such as ITU G.729B, GSM AMR, and a typical semi-supervised VAD.",
"title": ""
},
{
"docid": "3baec781f7b5aaab8598c3628ea0af3b",
"text": "Article history: Received 15 November 2010 Received in revised form 9 February 2012 Accepted 15 February 2012 Information professionals performing business activity related investigative analysis must routinely associate data from a diverse range of Web based general-interest business and financial information sources. XBRL has become an integral part of the financial data landscape. At the same time, Open Data initiatives have contributed relevant financial, economic, and business data to the pool of publicly available information on the Web but the use of XBRL in combination with Open Data remains at an early state of realisation. In this paper we argue that Linked Data technology, created for Web scale information integration, can accommodate XBRL data and make it easier to combine it with open datasets. This can provide the foundations for a global data ecosystem of interlinked and interoperable financial and business information with the potential to leverage XBRL beyond its current regulatory and disclosure role. We outline the uses of Linked Data technologies to facilitate XBRL consumption in conjunction with non-XBRL Open Data, report on current activities and highlight remaining challenges in terms of information consolidation faced by both XBRL and Web technologies. © 2012 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "86d8a61771cd14a825b6fc652f77d1d6",
"text": "The widespread of adult content on online social networks (e.g., Twitter) is becoming an emerging yet critical problem. An automatic method to identify accounts spreading sexually explicit content (i.e., adult account) is of significant values in protecting children and improving user experiences. Traditional adult content detection techniques are ill-suited for detecting adult accounts on Twitter due to the diversity and dynamics in Twitter content. In this paper, we formulate the adult account detection as a graph based classification problem and demonstrate our detection method on Twitter by using social links between Twitter accounts and entities in tweets. As adult Twitter accounts are mostly connected with normal accounts and post many normal entities, which makes the graph full of noisy links, existing graph based classification techniques cannot work well on such a graph. To address this problem, we propose an iterative social based classifier (ISC), a novel graph based classification technique resistant to the noisy links. Evaluations using large-scale real-world Twitter data show that, by labeling a small number of popular Twitter accounts, ISC can achieve satisfactory performance in adult account detection, significantly outperforming existing techniques.",
"title": ""
},
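
The ISC classifier itself is not specified in detail here; as a rough stand-in, the sketch below propagates "adult-likelihood" scores over a small account-entity graph from a few seed labels. The damping constant and the neighbour-averaging rule are assumptions, not the paper's algorithm.

```python
import numpy as np

def propagate_scores(adj, seed_scores, n_iter=20, damping=0.85):
    """Iteratively spread scores over a graph from a few labelled seeds.

    adj: (n, n) symmetric adjacency matrix (accounts and entities as nodes).
    seed_scores: length-n array, nonzero only for manually labelled nodes.
    """
    # Row-normalise so each node averages its neighbours' scores.
    deg = adj.sum(axis=1, keepdims=True)
    trans = np.divide(adj, deg, out=np.zeros_like(adj, dtype=float), where=deg > 0)
    scores = seed_scores.astype(float).copy()
    for _ in range(n_iter):
        scores = damping * trans @ scores + (1 - damping) * seed_scores
    return scores

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
seeds = np.array([1.0, 0.0, 0.0, 0.0])   # node 0 labelled as an adult account
print(propagate_scores(adj, seeds).round(3))
```
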
{
"docid": "5f606838b7158075a4b13871c5b6ec89",
"text": "The sentence is a standard textual unit in natural language processing applications. In many languages the punctuation mark that indicates the end-of-sentence boundary is ambiguous; thus the tokenizers of most NLP systems must be equipped with special sentence boundary recognition rules for every new text collection. As an alternative, this article presents an efficient, trainable system for sentence boundary disambiguation. The system, called Satz, makes simple estimates of the parts of speech of the tokens immediately preceding and following each punctuation mark, and uses these estimates as input to a machine learning algorithm that then classifies the punctuation mark. Satz is very fast both in training and sentence analysis, and its combined robustness and accuracy surpass existing techniques. The system needs only a small lexicon and training corpus, and has been shown to transfer quickly and easily from English to other languages, as demonstrated on French and German.",
"title": ""
},
{
"docid": "debd9e6eb7a3d19efe9dc6b80e4dee81",
"text": "Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions. We collect a dataset of 6,066 question sequences that inquire about semi-structured tables from Wikipedia, with 17,553 question-answer pairs in total. Existing QA systems face two major problems when evaluated on our dataset: (1) handling questions that contain coreferences to previous questions or answers, and (2) matching words or phrases in a question to corresponding entries in the associated table. We conclude by proposing strategies to handle both of these issues.",
"title": ""
},
{
"docid": "508ce0c5126540ad7f46b8f375c50df8",
"text": "Sex differences in children’s toy preferences are thought by many to arise from gender socialization. However, evidence from patients with endocrine disorders suggests that biological factors during early development (e.g., levels of androgens) are influential. In this study, we found that vervet monkeys (Cercopithecus aethiops sabaeus) show sex differences in toy preferences similar to those documented previously in children. The percent of contact time with toys typically preferred by boys (a car and a ball) was greater in male vervets (n = 33) than in female vervets (n = 30) (P < .05), whereas the percent of contact time with toys typically preferred by girls (a doll and a pot) was greater in female vervets than in male vervets (P < .01). In contrast, contact time with toys preferred equally by boys and girls (a picture book and a stuffed dog) was comparable in male and female vervets. The results suggest that sexually differentiated object preferences arose early in human evolution, prior to the emergence of a distinct hominid lineage. This implies that sexually dimorphic preferences for features (e.g., color, shape, movement) may have evolved from differential selection pressures based on the different behavioral roles of males and females, and that evolved object feature preferences may contribute to present day sexually dimorphic toy preferences in children. D 2002 Elsevier Science Inc. All rights reserved.",
"title": ""
},
{
"docid": "aecc5e00e4be529c76d6d629310c8b5c",
"text": "For a user to perceive continuous interactive response time in a visualization tool, the rule of thumb is that it must process, deliver, and display rendered results for any given interaction in under 100 milliseconds. In many visualization systems, successive interactions trigger independent queries and caching of results. Consequently, computationally expensive queries like multidimensional clustering cannot keep up with rapid sequences of interactions, precluding visual benefits such as motion parallax. In this paper, we describe a heuristic prefetching technique to improve the interactive response time of KMeans clustering in dynamic query visualizations of multidimensional data. We address the tradeoff between high interaction and intense query computation by observing how related interactions on overlapping data subsets produce similar clustering results, and characterizing these similarities within a parameter space of interaction. We focus on the two-dimensional parameter space defined by the minimum and maximum values of a time range manipulated by dragging and stretching a one-dimensional filtering lens over a plot of time series data. Using calculation of nearest neighbors of interaction points in parameter space, we reuse partial query results from prior interaction sequences to calculate both an immediate best-effort clustering result and to schedule calculation of an exact result. The method adapts to user interaction patterns in the parameter space by reprioritizing the interaction neighbors of visited points in the parameter space. A performance study on Mesonet meteorological data demonstrates that the method is a significant improvement over the baseline scheme in which interaction triggers on-demand, exact-range clustering with LRU caching. We also present initial evidence that approximate, temporary clustering results are sufficiently accurate (compared to exact results) to convey useful cluster structure during rapid and protracted interaction.",
"title": ""
},
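
The reuse idea above can be sketched as caching the centroids computed for previously visited (min, max) time-range parameters and warm-starting Lloyd's algorithm from the nearest cached interaction point, so an approximate result is available quickly. The bare-bones k-means and cache policy below are illustrative simplifications, not the paper's scheduler.

```python
import numpy as np

def lloyd(data, centroids, n_iter=5):
    """A few Lloyd iterations of k-means from a given initialisation."""
    for _ in range(n_iter):
        d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(centroids)):
            pts = data[labels == k]
            if len(pts):
                centroids[k] = pts.mean(axis=0)
    return centroids, labels

cache = {}   # maps (t_min, t_max) -> centroids from earlier interactions

def cluster_range(series, t_min, t_max, k=3):
    data = series[t_min:t_max].reshape(-1, 1)
    if cache:
        # Warm start from the nearest previously visited parameter point.
        nearest = min(cache, key=lambda p: (p[0] - t_min) ** 2 + (p[1] - t_max) ** 2)
        init = cache[nearest].copy()
    else:
        init = data[np.random.choice(len(data), k, replace=False)].astype(float)
    centroids, labels = lloyd(data, init)
    cache[(t_min, t_max)] = centroids
    return centroids, labels

series = np.concatenate([np.random.normal(m, 0.3, 200) for m in (0, 3, 6)])
cluster_range(series, 0, 400)   # first interaction, result cached
cluster_range(series, 0, 420)   # warm-started from the cached neighbour
```
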
{
"docid": "1e6c2319e7c9e51cd4e31107d56bce91",
"text": "Marketing has been criticised from all spheres today since the real worth of all the marketing efforts can hardly be precisely determined. Today consumers are better informed and also misinformed at times due to the bombardment of various pieces of information through a new type of interactive media, i.e., social media (SM). In SM, communication is through dialogue channels wherein consumers pay more attention to SM buzz rather than promotions of marketers. The various forms of SM create a complex set of online social networks (OSN), through which word-of-mouth (WOM) propagates and influence consumer decisions. With the growth of OSN and user generated contents (UGC), WOM metamorphoses to electronic word-of-mouth (eWOM), which spreads in astronomical proportions. Previous works study the effect of external and internal influences in affecting consumer behaviour. However, today the need is to resort to multidisciplinary approaches to find out how SM influence consumers with eWOM and online reviews. This paper reviews the emerging trend of how multiple disciplines viz. Statistics, Data Mining techniques, Network Analysis, etc. are being integrated by marketers today to analyse eWOM and derive actionable intelligence.",
"title": ""
},
{
"docid": "2739acca1a61ca8b2738b1312ab857ab",
"text": "The Telecare Medical Information System (TMIS) provides a set of different medical services to the patient and medical practitioner. The patients and medical practitioners can easily connect to the services remotely from their own premises. There are several studies carried out to enhance and authenticate smartcard-based remote user authentication protocols for TMIS system. In this article, we propose a set of enhanced and authentic Three Factor (3FA) remote user authentication protocols utilizing a smartphone capability over a dynamic Cloud Computing (CC) environment. A user can access the TMIS services presented in the form of CC services using his smart device e.g. smartphone. Our framework transforms a smartphone to act as a unique and only identity required to access the TMIS system remotely. Methods, Protocols and Authentication techniques are proposed followed by security analysis and a performance analysis with the two recent authentication protocols proposed for the healthcare TMIS system.",
"title": ""
},
{
"docid": "42aa520e1c46749e7abc924c0f56442d",
"text": "Internet of Things is evolving heavily in these times. One of the major obstacle is energy consumption in the IoT devices (sensor nodes and wireless gateways). The IoT devices are often battery powered wireless devices and thus reducing the energy consumption in these devices is essential to lengthen the lifetime of the device without battery change. It is possible to lengthen battery lifetime by efficient but lightweight sensor data analysis in close proximity of the sensor. Performing part of the sensor data analysis in the end device can reduce the amount of data needed to transmit wirelessly. Transmitting data wirelessly is very energy consuming task. At the same time, the privacy and security should not be compromised. It requires effective but computationally lightweight encryption schemes. This survey goes thru many aspects to consider in edge and fog devices to minimize energy consumption and thus lengthen the device and the network lifetime.",
"title": ""
},
{
"docid": "fcbf97bfbcf63ee76f588a05f82de11e",
"text": "The Deliberation without Attention (DWA) effect refers to apparent improvements in decision-making following a period of distraction. It has been presented as evidence for beneficial unconscious cognitive processes. We identify two major concerns with this claim: first, as these demonstrations typically involve subjective preferences, the effects of distraction cannot be objectively assessed as beneficial; second, there is no direct evidence that the DWA manipulation promotes unconscious decision processes. We describe two tasks based on the DWA paradigm in which we found no evidence that the distraction manipulation led to decision processes that are subjectively unconscious, nor that it reduced the influence of presentation order upon performance. Crucially, we found that a lack of awareness of decision process was associated with poorer performance, both in terms of subjective preference measures used in traditional DWA paradigm and in an equivalent task where performance can be objectively assessed. Therefore, we argue that reliance on conscious memory itself can explain the data. Thus the DWA paradigm is not an adequate method of assessing beneficial unconscious thought.",
"title": ""
},
{
"docid": "aafaffb28d171e2cddadbd9b65539e21",
"text": "LCD column drivers have traditionally used nonlinear R-string style digital-to-analog converters (DAC). This paper describes an architecture that uses 840 linear charge redistribution 10/12-bit DACs to implement a 420-output column driver. Each DAC performs its conversion in less than 15 /spl mu/s and draws less than 5 /spl mu/A. This architecture allows 10-bit independent color control in a 17 mm/sup 2/ die for the LCD television market.",
"title": ""
},
{
"docid": "786f1bbc10cfb952c7709b635ec01fcf",
"text": "Artificial neural networks (NN) have shown a significant promise in difficult tasks like image classification or speech recognition. Even well-optimized hardware implementations of digital NNs show significant power consumption. It is mainly due to non-uniform pipeline structures and inherent redundancy of numerous arithmetic operations that have to be performed to produce each single output vector. This paper provides a methodology for the design of well-optimized power-efficient NNs with a uniform structure suitable for hardware implementation. An error resilience analysis was performed in order to determine key constraints for the design of approximate multipliers that are employed in the resulting structure of NN. By means of a search based approximation method, approximate multipliers showing desired tradeoffs between the accuracy and implementation cost were created. Resulting approximate NNs, containing the approximate multipliers, were evaluated using standard benchmarks (MNIST dataset) and a real-world classification problem of Street-View House Numbers. Significant improvement in power efficiency was obtained in both cases with respect to regular NNs. In some cases, 91% power reduction of multiplication led to classification accuracy degradation of less than 2.80%. Moreover, the paper showed the capability of the back propagation learning algorithm to adapt with NNs containing the approximate multipliers.",
"title": ""
}
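
One way to emulate an approximate fixed-point multiplier in software and quantify its error before dropping it into a network is sketched below; the bit-truncation scheme is only a placeholder for whatever circuit the search-based approximation method would actually produce.

```python
import numpy as np

def approx_mult(a, b, drop_bits=4):
    """8-bit unsigned multiply that zeroes the lowest result bits,
    a crude stand-in for a power-optimised approximate multiplier."""
    exact = int(a) * int(b)
    return (exact >> drop_bits) << drop_bits

rng = np.random.default_rng(1)
a = rng.integers(0, 256, 10000)
b = rng.integers(0, 256, 10000)
exact = a.astype(np.int64) * b.astype(np.int64)
approx = np.array([approx_mult(x, y) for x, y in zip(a, b)])
rel_err = np.abs(exact - approx) / np.maximum(exact, 1)
print("mean relative error: %.4f" % rel_err.mean())
```
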
] |
scidocsrr
|
b893e6d4add98f2b44b1983897b707c8
|
FHM + : Faster High-Utility Itemset Mining Using Length Upper-Bound Reduction
|
[
{
"docid": "6d8e7574f75b19edaee0b2cc8d4c1383",
"text": "High-utility itemset mining (HUIM) is an important data mining task with wide applications. In this paper, we propose a novel algorithm named EFIM (EFficient high-utility Itemset Mining), which introduces several new ideas to more efficiently discovers high-utility itemsets both in terms of execution time and memory. EFIM relies on two upper-bounds named sub-tree utility and local utility to more effectively prune the search space. It also introduces a novel array-based utility counting technique named Fast Utility Counting to calculate these upper-bounds in linear time and space. Moreover, to reduce the cost of database scans, EFIM proposes efficient database projection and transaction merging techniques. An extensive experimental study on various datasets shows that EFIM is in general two to three orders of magnitude faster and consumes up to eight times less memory than the state-of-art algorithms dHUP, HUI-Miner, HUP-Miner, FHM and UP-Growth+.",
"title": ""
}
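
All HUIM algorithms, EFIM included, are built around the utility of an itemset in a transaction database. The naive computation below shows that quantity and a minimum-utility check; EFIM's sub-tree/local utility upper bounds, database projection and transaction merging are not reproduced, and the toy database and threshold are assumptions.

```python
# Each transaction maps item -> (quantity, unit_profit).
db = [
    {'a': (2, 5), 'b': (1, 2)},
    {'a': (1, 5), 'c': (3, 1), 'b': (2, 2)},
    {'c': (4, 1)},
]

def utility(itemset, transaction):
    """Utility of an itemset in one transaction (0 if not fully contained)."""
    if not set(itemset) <= transaction.keys():
        return 0
    return sum(q * p for q, p in (transaction[i] for i in itemset))

def total_utility(itemset, database):
    return sum(utility(itemset, t) for t in database)

minutil = 15
for candidate in [('a',), ('a', 'b'), ('c',)]:
    u = total_utility(candidate, db)
    print(candidate, u, "HIGH-UTILITY" if u >= minutil else "low")
```
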
] |
[
{
"docid": "c5c64d7fcd9b4804f7533978026dcfbd",
"text": "This paper presents a new method to control multiple micro-scale magnetic agents operating in close proximity to each other for applications in microrobotics. Controlling multiple magnetic microrobots close to each other is difficult due to magnetic interactions between the agents, and here we seek to control those interactions for the creation of desired multi-agent formations. We use the fact that all magnetic agents orient to the global input magnetic field to modulate the local attraction-repulsion forces between nearby agents. Here we study these controlled interaction magnetic forces for agents at a water-air interface and devise two controllers to regulate the inter-agent spacing and heading of the set, for motion in two dimensions. Simulation and experimental demonstrations show the feasibility of the idea and its potential for the completion of complex tasks using teams of microrobots. Average tracking error of less than 73 μm and 14° is accomplished for the regulation of the inter-agent space and the pair heading angle, respectively, for identical disk-shape agents with nominal radius of 500 μm and thickness of 80 μm operating within several body-lengths of each other.",
"title": ""
},
{
"docid": "3cf4fe068901b9d4ccdcaea2232d7d4e",
"text": "Schizophrenia (SZ) is a complex mental disorder associated with genetic variations, brain development and activities, and environmental factors. There is an increasing interest in combining genetic, epigenetic and neuroimaging datasets to explore different level of biomarkers for the correlation and interaction between these diverse factors. Sparse Multi-Canonical Correlation Analysis (sMCCA) is a powerful tool that can analyze the correlation of three or more datasets. In this paper, we propose the sMCCA model for imaging genomics study. We show the advantage of sMCCA over sparse CCA (sCCA) through the simulation testing, and further apply it to the analysis of real data (SNPs, fMRI and methylation) from schizophrenia study. Some new genes and brain regions related to SZ disease are discovered by sMCCA and the relationships among these biomarkers are further discussed.",
"title": ""
},
{
"docid": "ed447f3f4bbe8478e9e1f3c4593dbf1b",
"text": "We revisit the fundamental question of Bitcoin's security against double spending attacks. While previous work has bounded the probability that a transaction is reversed, we show that no such guarantee can be effectively given if the attacker can choose when to launch the attack. Other approaches that bound the cost of an attack have erred in considering only limited attack scenarios, and in fact it is easy to show that attacks may not cost the attacker at all. We therefore provide a different interpretation of the results presented in previous papers and correct them in several ways. We provide different notions of the security of transactions that provide guarantees to different classes of defenders: merchants who regularly receive payments, miners, and recipients of large one-time payments. We additionally consider an attack that can be launched against lightweight clients, and show that these are less secure than their full node counterparts and provide the right strategy for defenders in this case as well. Our results, overall, improve the understanding of Bitcoin's security guarantees and provide correct bounds for those wishing to safely accept transactions.",
"title": ""
},
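
For orientation, the classical confirmation-based bound that the passage argues is insufficient on its own can be computed as in Nakamoto's original analysis; treat the sketch below as the baseline being critiqued, not as a security guarantee.

```python
from math import exp, factorial

def double_spend_probability(q, z):
    """Probability an attacker with hashrate share q eventually overtakes
    the honest chain after the merchant waits for z confirmations."""
    p = 1.0 - q
    if q >= p:
        return 1.0
    lam = z * q / p
    prob = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        prob -= poisson * (1 - (q / p) ** (z - k))
    return prob

for z in (1, 3, 6):
    print(z, "confirmations:", round(double_spend_probability(0.10, z), 6))
```
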
{
"docid": "6b4efbb3572eeb09536e2ec82825f2fb",
"text": "Well-designed games are good motivators by nature, as they imbue players with clear goals and a sense of reward and fulfillment, thus encouraging them to persist and endure in their quests. Recently, this motivational power has started to be applied to non- game contexts, a practice known as Gamification. This adds gaming elements to non-game processes, motivating users to adopt new behaviors, such as improving their physical condition, working more, or learning something new. This paper describes an experiment in which game-like elements were used to improve the delivery of a Master's level College course, including scoring, levels, leaderboards, challenges and badges. To assess how gamification impacted the learning experience, we compare the gamified course to its non-gamified version from the previous year, using different performance measures. We also assessed student satisfaction as compared to other regular courses in the same academic context. Results were very encouraging, showing significant increases ranging from lecture attendance to online participation, proactive behaviors and perusing the course reference materials. Moreover, students considered the gamified instance to be more motivating, interesting and easier to learn as compared to other courses. We finalize by discussing the implications of these results on the design of future gamified learning experiences.",
"title": ""
},
{
"docid": "1841d05590d1173711a2d47824a979cc",
"text": "Heater plates or sheets that are visibly transparent have many interesting applications in optoelectronic devices such as displays, as well as in defrosting, defogging, gas sensing and point-of-care disposable devices. In recent years, there have been many advances in this area with the advent of next generation transparent conducting electrodes (TCE) based on a wide range of materials such as oxide nanoparticles, CNTs, graphene, metal nanowires, metal meshes and their hybrids. The challenge has been to obtain uniform and stable temperature distribution over large areas, fast heating and cooling rates at low enough input power yet not sacrificing the visible transmittance. This review provides topical coverage of this important research field paying due attention to all the issues mentioned above.",
"title": ""
},
{
"docid": "30fb0e394f6c4bf079642cd492229b67",
"text": "Although modern communications services are susceptible to third-party eavesdropping via a wide range of possible techniques, law enforcement agencies in the US and other countries generally use one of two technologies when they conduct legally-authorized interception of telephones and other communications traffic. The most common of these, designed to comply with the 1994 Communications Assistance for Law Enforcement Act(CALEA), use a standard interface provided in network switches.\n This paper analyzes the security properties of these interfaces. We demonstrate that the standard CALEA interfaces are vulnerable to a range of unilateral attacks by the intercept target. In particular, because of poor design choices in the interception architecture and protocols, our experiments show it is practical for a CALEA-tapped target to overwhelm the link to law enforcement with spurious signaling messages without degrading her own traffic, effectively preventing call records as well as content from being monitored or recorded. We also identify stop-gap mitigation strategies that partially mitigate some of our identified attacks.",
"title": ""
},
{
"docid": "31c0dc8f0a839da9260bb9876f635702",
"text": "The application of a recently developed broadband beamformer to distinguish audio signals received from different directions is experimentally tested. The beamformer combines spatial and temporal subsampling using a nested array and multirate techniques which leads to the same region of support in the frequency domain for all subbands. This allows using the same beamformer for all subbands. The experimental set-up is presented and the recorded signals are analyzed. Results indicate that the proposed approach can be used to distinguish plane waves propagating with different direction of arrivals.",
"title": ""
},
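
The nested-array, multirate design itself is not reproduced here; the sketch below is only a plain narrowband delay-and-sum beamformer for a uniform linear array, the kind of reference such subband designs are compared against. Array geometry, sample rate and the test signal are arbitrary choices.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, doa_deg, fs, c=343.0):
    """Align microphone signals for a plane wave from doa_deg and sum them."""
    doa = np.deg2rad(doa_deg)
    arrival = mic_positions * np.sin(doa) / c     # relative arrival times
    comp = arrival.max() - arrival                # extra delay to align mics
    out = np.zeros(signals.shape[1])
    for sig, d in zip(signals, comp):
        shift = int(round(d * fs))
        if shift:
            out[shift:] += sig[:len(sig) - shift]
        else:
            out += sig
    return out / len(signals)

fs, f0 = 16000, 1000.0
mics = np.arange(4) * 0.05                        # 4 mics, 5 cm spacing
t = np.arange(0, 0.05, 1 / fs)
# Simulate a 1 kHz plane wave arriving from 30 degrees.
true_delays = mics * np.sin(np.deg2rad(30)) / 343.0
signals = np.array([np.sin(2 * np.pi * f0 * (t - d)) for d in true_delays])
print("steered to 30 deg:", np.std(delay_and_sum(signals, mics, 30, fs)))
print("steered to -60 deg:", np.std(delay_and_sum(signals, mics, -60, fs)))
```
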
{
"docid": "39b5095283fd753013c38459a93246fd",
"text": "OBJECTIVE\nTo determine whether cannabis use in adolescence predisposes to higher rates of depression and anxiety in young adulthood.\n\n\nDESIGN\nSeven wave cohort study over six years.\n\n\nSETTING\n44 schools in the Australian state of Victoria.\n\n\nPARTICIPANTS\nA statewide secondary school sample of 1601 students aged 14-15 followed for seven years.\n\n\nMAIN OUTCOME MEASURE\nInterview measure of depression and anxiety (revised clinical interview schedule) at wave 7.\n\n\nRESULTS\nSome 60% of participants had used cannabis by the age of 20; 7% were daily users at that point. Daily use in young women was associated with an over fivefold increase in the odds of reporting a state of depression and anxiety after adjustment for intercurrent use of other substances (odds ratio 5.6, 95% confidence interval 2.6 to 12). Weekly or more frequent cannabis use in teenagers predicted an approximately twofold increase in risk for later depression and anxiety (1.9, 1.1 to 3.3) after adjustment for potential baseline confounders. In contrast, depression and anxiety in teenagers predicted neither later weekly nor daily cannabis use.\n\n\nCONCLUSIONS\nFrequent cannabis use in teenage girls predicts later depression and anxiety, with daily users carrying the highest risk. Given recent increasing levels of cannabis use, measures to reduce frequent and heavy recreational use seem warranted.",
"title": ""
},
{
"docid": "494636aeb3d02c02cce1db18b4ce63ee",
"text": "AIMS/BACKGROUND\nThe objective of this review was to define the impact of cementation mode on the longevity of different types of single tooth restorations and fixed dental prostheses (FDP).\n\n\nMETHODS\nLiterature search by PubMed as the major database was used utilizing the terms namely, adhesive techniques, all-ceramic crowns, cast-metal, cement, cementation, ceramic inlays, gold inlays, metal-ceramic, non-bonded fixed-partial-dentures, porcelain veneers, resin-bonded fixed-partial-dentures, porcelain-fused-to-metal, and implant-supported-restorations together with manual search of non-indexed literature. Cementation of root canal posts and cores were excluded. Due to lack of randomized prospective clinical studies in some fields of cementation, recommendations had to be based on lower evidence level (Centre of Evidence Based Medicine, Oxford) for special applications of current cements.\n\n\nRESULTS\nOne-hundred-and-twenty-five articles were selected for the review. The primary function of the cementation is to establish reliable retention, a durable seal of the space between the tooth and the restoration, and to provide adequate optical properties. The various types of cements used in dentistry could be mainly divided into two groups: Water-based cements and polymerizing cements. Water-based cements exhibited satisfying long-term clinical performance associated with cast metal (inlays, onlays, partial crowns) as well as single unit metal-ceramic FDPs and multiple unit FDPs with macroretentive preparation designs and adequate marginal fit. Early short-term clinical results with high-strength all-ceramic restorations luted with water-based cements are also promising. Current polymerizing cements cover almost all fields of water-based cements and in addition to that they are mainly indicated for non-retentive restorations. They are able to seal the tooth completely creating hybrid layer formation. Furthermore, adhesive capabilities of polymerizing cements allowed for bonded restorations, promoting at the same time the preservation of dental tissues.",
"title": ""
},
{
"docid": "a129ad8154320f7be949527843207b89",
"text": "Availability of several web services having a similar functionality has led to using quality of service (QoS) attributes to support services selection and management. To improve these operations and be performed proactively, time series ARIMA models have been used to forecast the future QoS values. However, the problem is that in this extremely dynamic context the observed QoS measures are characterized by a high volatility and time-varying variation to the extent that existing ARIMA models cannot guarantee accurate QoS forecasting where these models are based on a homogeneity (constant variation over time) assumption, which can introduce critical problems such as proactively selecting a wrong service and triggering unrequired adaptations and thus leading to follow-up failures and increased costs. To address this limitation, we propose a forecasting approach that integrates ARIMA and GARCH models to be able to capture the QoS attributes' volatility and provide accurate forecasts. Using QoS datasets of real-world web services we evaluate the accuracy and performance aspects of the proposed approach. Results show that the proposed approach outperforms the popular existing ARIMA models and improves the forecasting accuracy of QoS measures and violations by on average 28.7% and 15.3% respectively.",
"title": ""
},
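
The GARCH layer that such a hybrid adds on top of an ARIMA mean model is essentially the conditional-variance recursion below; the parameters are illustrative rather than fitted, and the ARIMA stage is assumed to have produced the residuals.

```python
import numpy as np

def garch11_variance(residuals, omega, alpha, beta):
    """GARCH(1,1) recursion sigma^2_t = omega + alpha*e^2_{t-1}
    + beta*sigma^2_{t-1}, applied to ARIMA residuals."""
    sigma2 = np.empty(len(residuals) + 1)
    sigma2[0] = residuals.var()
    for t, e in enumerate(residuals):
        sigma2[t + 1] = omega + alpha * e ** 2 + beta * sigma2[t]
    return sigma2          # sigma2[-1] is the one-step-ahead variance forecast

rng = np.random.default_rng(0)
resid = rng.normal(0, 1, 500) * np.linspace(0.5, 2.0, 500)  # volatility grows
s2 = garch11_variance(resid, omega=0.05, alpha=0.1, beta=0.85)
print("next-step conditional variance forecast:", round(s2[-1], 3))
```
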
{
"docid": "584de328ade02c34e36e2006f3e66332",
"text": "The HP-ASD technology has experienced a huge development in the last decade. This can be appreciated by the large number of recently introduced drive configurations on the market. In addition, many industrial applications are reaching MV operation and megawatt range or have experienced changes in requirements on efficiency, performance, and power quality, making the use of HP-ASDs more attractive. It can be concluded that, HP-ASDs is an enabling technology ready to continue powering the future of industry for the decades to come.",
"title": ""
},
{
"docid": "92c3738d8873eb223a5a478cc76c95b0",
"text": "Visual target tracking is one of the major fields in computer vision system. Object tracking has many practical applications such as automated surveillance system, military guidance, traffic management system, fault detection system, artificial intelligence and robot vision system. But it is difficult to track objects with image sensor. Especially, multiple objects tracking is harder than single object tracking. This paper proposes multiple objects tracking algorithm based on the Kalman filter. Our algorithm uses the Kalman filter as many as the number of moving objects in the image frame. If many moving objects exist in the image, however, we obtain multiple measurements. Therefore, precise data association is necessary in order to track multiple objects correctly. Another problem of multiple objects tracking is occlusion that causes merge and split. For solving these problems, this paper defines the cost function using some factors. Experiments using Matlab show that the performance of the proposed algorithm is appropriate for multiple objects tracking in real-time.",
"title": ""
},
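
Each tracked object in such a scheme carries its own constant-velocity Kalman filter; a minimal predict/update sketch follows. The data-association cost function the paper defines for handling merges and splits is not shown, and the noise covariances are assumptions.

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],      # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position is measured
Q = np.eye(4) * 0.01                         # process noise
R = np.eye(2) * 1.0                          # measurement noise

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    y = z - H @ x                            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for z in np.array([[1.0, 1.1], [2.1, 1.9], [3.0, 3.2]]):
    x, P = predict(x, P)
    x, P = update(x, P, z)
print("estimated state:", x.round(2))
```
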
{
"docid": "21393a1c52b74517336ef3e08dc4d730",
"text": "The technical part of these Guidelines and Recommendations, produced under the auspices of EFSUMB, provides an introduction to the physical principles and technology on which all forms of current commercially available ultrasound elastography are based. A difference in shear modulus is the common underlying physical mechanism that provides tissue contrast in all elastograms. The relationship between the alternative technologies is considered in terms of the method used to take advantage of this. The practical advantages and disadvantages associated with each of the techniques are described, and guidance is provided on optimisation of scanning technique, image display, image interpretation and some of the known image artefacts.",
"title": ""
},
{
"docid": "3e0dd3cf428074f21aaf202342003554",
"text": "Despite significant recent work, purely unsupervised techniques for part-of-speech (POS) tagging have not achieved useful accuracies required by many language processing tasks. Use of parallel text between resource-rich and resource-poor languages is one source of weak supervision that significantly improves accuracy. However, parallel text is not always available and techniques for using it require multiple complex algorithmic steps. In this paper we show that we can build POS-taggers exceeding state-of-the-art bilingual methods by using simple hidden Markov models and a freely available and naturally growing resource, the Wiktionary. Across eight languages for which we have labeled data to evaluate results, we achieve accuracy that significantly exceeds best unsupervised and parallel text methods. We achieve highest accuracy reported for several languages and show that our approach yields better out-of-domain taggers than those trained using fully supervised Penn Treebank.",
"title": ""
},
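
Decoding in such an HMM tagger is plain Viterbi; a compact sketch follows, with toy parameters standing in for the Wiktionary-constrained emission and transition tables.

```python
import numpy as np

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for an observation sequence (log-space)."""
    V = [{s: np.log(start_p[s]) + np.log(emit_p[s].get(obs[0], 1e-12))
          for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            best_prev = max(states, key=lambda p: V[t-1][p] + np.log(trans_p[p][s]))
            V[t][s] = (V[t-1][best_prev] + np.log(trans_p[best_prev][s])
                       + np.log(emit_p[s].get(obs[t], 1e-12)))
            back[t][s] = best_prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.insert(0, back[t][path[0]])
    return path

states = ['DET', 'NOUN', 'VERB']
start = {'DET': 0.6, 'NOUN': 0.3, 'VERB': 0.1}
trans = {'DET': {'DET': 0.05, 'NOUN': 0.9, 'VERB': 0.05},
         'NOUN': {'DET': 0.1, 'NOUN': 0.3, 'VERB': 0.6},
         'VERB': {'DET': 0.5, 'NOUN': 0.4, 'VERB': 0.1}}
emit = {'DET': {'the': 0.9}, 'NOUN': {'dog': 0.5, 'walk': 0.2},
        'VERB': {'walks': 0.5, 'walk': 0.3}}
print(viterbi(['the', 'dog', 'walks'], states, start, trans, emit))
```
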
{
"docid": "f8def1217137641547921e3f52c0b4ae",
"text": "A 50-GHz charge pump phase-locked loop (PLL) utilizing an LC-oscillator-based injection-locked frequency divider (ILFD) was fabricated in 0.13-mum logic CMOS process. The PLL can be locked from 45.9 to 50.5 GHz and output power level is around -10 dBm. The operating frequency range is increased by tracking the self-oscillation frequencies of the voltage-controlled oscillator (VCO) and the frequency divider. The PLL including buffers consumes 57 mW from 1.5/0.8-V supplies. The phase noise at 50 kHz, 1 MHz, and 10 MHz offset from the carrier is -63.5, -72, and -99 dBc/Hz, respectively. The PLL also outputs second-order harmonics at frequencies between 91.8 and 101 GHz. The output frequency of 101 GHz is the highest for signals locked by a PLL fabricated using the silicon integrated circuits technology.",
"title": ""
},
{
"docid": "c773efb805899ee9e365b5f19ddb40bc",
"text": "In this paper, we overview the 2009 Simulated Car Racing Championship-an event comprising three competitions held in association with the 2009 IEEE Congress on Evolutionary Computation (CEC), the 2009 ACM Genetic and Evolutionary Computation Conference (GECCO), and the 2009 IEEE Symposium on Computational Intelligence and Games (CIG). First, we describe the competition regulations and the software framework. Then, the five best teams describe the methods of computational intelligence they used to develop their drivers and the lessons they learned from the participation in the championship. The organizers provide short summaries of the other competitors. Finally, we summarize the championship results, followed by a discussion about what the organizers learned about 1) the development of high-performing car racing controllers and 2) the organization of scientific competitions.",
"title": ""
},
{
"docid": "cb7b6c586f106518e234d893a341b238",
"text": "For more than thirty years, people have relied primarily on screen-based text and graphics to interact with computers. Whether the screen is placed on a desk, held in one’s hand, worn on one’s head, or embedded in the physical environment, the screen has cultivated a predominantly visual paradigm of humancomputer interaction. In this chapter, we discuss a growing space of interfaces in which physical objects play a central role as both physical representations and controls for digital information. We present an interaction model and key characteristics for such “tangible user interfaces,” and explore these characteristics in a number of interface examples. This discussion supports a newly integrated view of both recent and previous work, and points the way towards new kinds of computationally-mediated interfaces that more seamlessly weave together the physical and digital worlds.",
"title": ""
},
{
"docid": "30279db171fffe6fac561541a5d175ca",
"text": "Deformable displays can provide two major benefits compared to rigid displays: Objects of different shapes and deformabilities, situated in our physical environment, can be equipped with deformable displays, and users can benefit from their pre-existing knowledge about the interaction with physical objects when interacting with deformable displays. In this article we present InformationSense, a large, highly deformable cloth display. The article contributes to two research areas in the context of deformable displays: It presents an approach for the tracking of large, highly deformable surfaces, and it presents one of the first UX analyses of cloth displays that will help with the design of future interaction techniques for this kind of display. The comparison of InformationSense with a rigid display interface unveiled the trade-off that while users are able to interact with InformationSense more naturally and significantly preferred InformationSense in terms of joy of use, they preferred the rigid display interfaces in terms of efficiency. This suggests that deformable displays are already suitable if high hedonic qualities are important but need to be enhanced with additional digital power if high pragmatic qualities are required.",
"title": ""
}
] |
scidocsrr
|
85b826ebc9d413bc2f8cafc15f97553b
|
Deep Metric Learning for Visual Understanding: An Overview of Recent Advances
|
[
{
"docid": "fa82b75a3244ef2407c2d14c8a3a5918",
"text": "Popular sites like Houzz, Pinterest, and LikeThatDecor, have communities of users helping each other answer questions about products in images. In this paper we learn an embedding for visual search in interior design. Our embedding contains two different domains of product images: products cropped from internet scenes, and products in their iconic form. With such a multi-domain embedding, we demonstrate several applications of visual search including identifying products in scenes and finding stylistically similar products. To obtain the embedding, we train a convolutional neural network on pairs of images. We explore several training architectures including re-purposing object classifiers, using siamese networks, and using multitask learning. We evaluate our search quantitatively and qualitatively and demonstrate high quality results for search across multiple visual domains, enabling new applications in interior design.",
"title": ""
}
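
Siamese training of the sort described is commonly driven by a pairwise contrastive loss; a numpy sketch of that objective follows, with the CNN feature extractor omitted and the margin chosen arbitrarily.

```python
import numpy as np

def contrastive_loss(f1, f2, same, margin=1.0):
    """f1, f2: (n, d) embeddings of image pairs; same: (n,) 1 if the pair
    shows the same product (scene crop vs iconic view), else 0."""
    d = np.linalg.norm(f1 - f2, axis=1)
    pos = same * d ** 2                                   # pull matches together
    neg = (1 - same) * np.maximum(margin - d, 0.0) ** 2   # push mismatches apart
    return np.mean(pos + neg)

rng = np.random.default_rng(0)
f_scene = rng.normal(size=(8, 16))
f_icon = f_scene + rng.normal(scale=0.1, size=(8, 16))   # matching pairs
print("loss on matching pairs:",
      round(contrastive_loss(f_scene, f_icon, np.ones(8)), 4))
print("loss if labelled as mismatches:",
      round(contrastive_loss(f_scene, f_icon, np.zeros(8)), 4))
```
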
] |
[
{
"docid": "8ebdf482a0a258722906a26d26164ba6",
"text": "Vehicle detection is a challenging problem in autonomous driving systems, due to its large structural and appearance variations. In this paper, we propose a novel vehicle detection scheme based on multi-task deep convolutional neural networks (CNNs) and region-of-interest (RoI) voting. In the design of CNN architecture, we enrich the supervised information with subcategory, region overlap, bounding-box regression, and category of each training RoI as a multi-task learning framework. This design allows the CNN model to share visual knowledge among different vehicle attributes simultaneously, and thus, detection robustness can be effectively improved. In addition, most existing methods consider each RoI independently, ignoring the clues from its neighboring RoIs. In our approach, we utilize the CNN model to predict the offset direction of each RoI boundary toward the corresponding ground truth. Then, each RoI can vote those suitable adjacent bounding boxes, which are consistent with this additional information. The voting results are combined with the score of each RoI itself to find a more accurate location from a large number of candidates. Experimental results on the real-world computer vision benchmarks KITTI and the PASCAL2007 vehicle data set show that our approach achieves superior performance in vehicle detection compared with other existing published works.",
"title": ""
},
{
"docid": "eec40573db841727a1410e5408ae43ed",
"text": "The design of a compact low-loss magic-T is proposed. The planar magic-T incorporates the compact microstrip-slotline tee junction and small microstrip-slotline transition area to reduce slotline radiation. The experimental results show that the magic-T produces broadband in-phase and out-of-phase power combiner/divider responses, has an average in-band insertion loss of 0.3 dB and small in-band phase and amplitude imbalance of less than plusmn 1.6deg and plusmn 0.3 dB, respectively.",
"title": ""
},
{
"docid": "24a78bcc7c60ab436f6fd32bdc0d7661",
"text": "Passing the Turing Test is not a sensible goal for Artificial Intelligence. Adherence to Turing's vision from 1950 is now actively harmful to our field. We review problems with Turing's idea, and suggest that, ironically, the very cognitive science that he tried to create must reject his research goal.",
"title": ""
},
{
"docid": "a81f2102488e6d9599a5796b1b6eba57",
"text": "A content based image retrieval system (CBIR) is proposed to assist the dermatologist for diagnosis of skin diseases. First, after collecting the various skin disease images and their text information (disease name, symptoms and cure etc), a test database (for query image) and a train database of 460 images approximately (for image matching) are prepared. Second, features are extracted by calculating the descriptive statistics. Third, similarity matching using cosine similarity and Euclidian distance based on the extracted features is discussed. Fourth, for better results first four images are selected during indexing and their related text information is shown in the text file. Last, the results shown are compared according to doctor’s description and according to image content in terms of precision and recall and also in terms of a self developed scoring system. Keyword: Cosine similarity, Euclidian distance, Precision, Recall, Query image. 1. Basic introduction to cbir CBIR differs from classical information retrieval in that image databases are essentially unstructured, since digitized images consist purely of arrays of pixel intensities, with no inherent meaning. One of the key issues with any kind of image processing is the need to extract useful information from the raw data (such as recognizing the presence of particular shapes or textures) before any kind of reasoning about the image’s contents is possible. An example may make this clear. Many police forces now use automatic face recognition systems. Such systems may be used in one of two ways. Firstly, the image in front of the camera may be compared with a single individual’s database record to verify his or her identity. In this case, only two images are matched, a process few observers would call CBIR[15]. Secondly, the entire database may be searched to find the most closely matching images. This is a genuine example of CBIR. 2. Structure of CBIR model Basic modules and their brief discussion of a CBIR modal is described in the following Figure 1.Content based image retrieval system consists of following modules: Feature Extraction: In this module the features of interest are calculated for image database. Fig.1 Modules of CBIR modal Feature extraction of query image: This module calculates the feature of the query image. Query image can be a part of image database or it may not be a part of image database. Similarity measure: This module compares the feature database of the existing images with the query image on basis of the similarity measure of the interest[2]. Image Database Feature database Feature Extraction Results images Query image Indexing Similarity measure Feature extraction of query image ACSIJ Advances in Computer Science: an International Journal, Vol. 2, Issue 4, No.5 , September 2013 ISSN : 2322-5157 www.ACSIJ.org 89 Copyright (c) 2013 Advances in Computer Science: an International Journal. All Rights Reserved. Indexing: This module performs filtering of images based on their content would provide better indexing and return more accurate results. Retrieval and Result: This module will display the matching images to the user based on indexing of similarity measure. Basic Components of the CBIR system are: Image Database: Database which stores images. It can be normal drive storage or database storage. Feature database: The entire extracted feature are stored in database like mat file, excel sheets etc. 3. Scope of CBIR for skin disease images Skin diseases are well known to be a large family. 
The identification of a certain skin disease is a complex and demanding task for dermatologist. A computer aided system can reduce the work load of the dermatologists, especially when the image database is immense. However, most contemporary work on computer aided analysis skin disease focuses on the detection of malignant melanoma. Thus, the features they used are very limited. The goal of our work is to build a retrieval algorithm for the more general diagnosis of various types of skin diseases. It can be very complex to define the features that can best distinguish between classes and yet be consistent within the same class of skin disease. Image and related Text Database is collected from a demonologist’s websites [17, 18]. There are mainly two kinds of methods for the application of a computer assistant. One is text query. A universally accepted and comprehensive dermatological terminology is created, and then example images are located and viewed using dermatological diagnostic concepts using a partial or complete word search. But the use of only descriptive annotation is too coarse and it is easy to make different types of disease fall into same category. The other method is to use visual features derived from color images of the diseased skin. The ability to perform reliable and consistent clinical research in dermatology hinges not only on the ability to accurately describe and codify diagnostic information, but also complex visual data. Visual patterns and images are at the core of dermatology education, research and practice. Visual features are broadly used in melanoma research, skin classification and segmentation. But there is a lack of tools using content-based skin image retrieval. 4. Problem formulation However, with the emergence of massive image databases, the traditional manual and text based search suffers from the following limitations: Manual annotations require too much time and are expensive to implement. As the number of images in a database grows, the difficulty in finding desired images increases. It is not feasible to manually annotate all attributes of the image content for large number of images. Manual annotations fail to deal with the discrepancy of subjective perception. The phrase, “an image says more than a thousand words,” implies a Content-Based Approach to Medical Image Database Retrieval that the textual description is not sufficient for depicting subjective perception. Typically, a medical image usually contains several objects, which convey specific information. Nevertheless, different interpretations for a pathological area can be made by different radiologists. To capture all knowledge, concepts, thoughts, and feelings for the content of any images is almost impossible. 5. Methodology of work 5.1General approach The general approach of image retrieval systems is based on query by image content. Figure 2 illustrate an overview of the image retrieval modal of skin disease images of proposed work. Fig.2 Overview of the Image query based skin disease image retrieval process FIRST FOUR RESULT IMAGES AND CORRESPONDI NG TEXT INFORMATION SKIN DISEASE IMAGE RETRIVAL SYSTEM IMAGE PRE PROCESSING RELATED SKIN DISEASE IMAGES (TRAIN DATABASE) AND TEXT INFO QUERY IMAGE FROM TEST DATABASE FEEDBACK FROM USER TEST DATABASE TEXT DATABASE TRAIN DATABASE ACSIJ Advances in Computer Science: an International Journal, Vol. 2, Issue 4, No.5 , September 2013 ISSN : 2322-5157 www.ACSIJ.org 90 Copyright (c) 2013 Advances in Computer Science: an International Journal. 
All Rights Reserved. 5.2 Database details : Our train database contains total 460 images (approximately) which are divided into twenty eight classes of skin disease, collected from reputed websites of medical images [17,18]. Test database contains images which are selected as query image. In the present work size of train database and test database is same. All the images are in .JPEG format. Images pixel dimension is set 300X300 by preprocessing. The illumination condition was also unknown for each image. Also, the images were collected with various backgrounds. Text database corresponding to each image contains skin disease name, symptoms, cure, and description of the disease. 5.3 Use Of Descriptive Statistics Parameters for Feature Extraction Statistical texture measures are calculated directly from the original image values, like mean, standard deviation, variance, kurtosis and Skewness [13], which do not consider pixel neighborhood relationships. Statistical measure of randomness that can be used to characterize the texture of the input image. Standard deviation is pixel value analysis feature [11]. First order statistics of the gray level allocation for each image matrix I(x, y) were examined through five commonly used metrics, namely, mean, variance, standard deviation, skewness and kurtosis as descriptive measurements of the overall gray level distribution of an image. Descriptive statistics refers to properties of distributions, such as location, dispersion, and shape [15]. 5.3.1 Location Measure: Location statistics describe where the data is located. Mean : For calculating the mean of element of vector x. ( ) = ( )/ if x is a matrix , compute the mean of each column and return them into a row vector[16]. 5.3.2 Dispersion Measures: Dispersion statistics summarize the scatter or spread of the data. Most of these functions describe deviation from a particular location. For instance, variance is a measure of deviation from the mean, and standard deviation is just the square root of the variance. Variance : For calculating the variance of element of vector x. ( ) = 1/(( − 1) _ ( ) − ( )^2) If x is a matrix , compute the variance of each column and return them into a row vector [16]. Standard Deviation: For calculating the Standard Deviation of element of vector x. ( ) = (1/( − 1) _ ( ( ) − ( ))^2) If x is a matrix , compute the Standard Deviation of each column and return them into a row vector[16]. 5.3.3 Shape Measures: For getting some information about the shape of a distribution using shape statistics. Skewness describes the amount of asymmetry. Kurtosis measures the concentration of data around the peak and in the tails versus the concentration in the flanks. Skewness: For calculating the skewness of element of vector x. ( ) = 1/ ( ) ^ (−3) (( − ( ). ^3) If x is a matrix, return the skewness along the first nonsingleton dimension of the matrix [",
"title": ""
},
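
The first-order statistics the passage lists (mean, variance, standard deviation, skewness, kurtosis) and the cosine-similarity match it describes reduce to the short sketch below; this is a reconstruction for illustration, not the authors' exact implementation, and the random images stand in for the skin-disease databases.

```python
import numpy as np

def first_order_features(img):
    """Mean, variance, std, skewness and kurtosis of the grey-level values."""
    x = img.astype(float).ravel()
    n, m, s = x.size, x.mean(), x.std(ddof=1)
    var = x.var(ddof=1)
    skew = np.sum((x - m) ** 3) / (n * s ** 3)
    kurt = np.sum((x - m) ** 4) / (n * s ** 4)
    return np.array([m, var, s, skew, kurt])

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
query = rng.integers(0, 256, (300, 300))
database = [rng.integers(0, 256, (300, 300)) for _ in range(5)]
qf = first_order_features(query)
scores = [cosine_similarity(qf, first_order_features(d)) for d in database]
best = int(np.argmax(scores))       # index of the closest training image
print("best match:", best, "score:", round(scores[best], 4))
```
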
{
"docid": "4bd7a933cf0d54a84c106a1591452565",
"text": "Face anti-spoofing (a.k.a. presentation attack detection) has recently emerged as an active topic with great significance for both academia and industry due to the rapidly increasing demand in user authentication on mobile phones, PCs, tablets, and so on. Recently, numerous face spoofing detection schemes have been proposed based on the assumption that training and testing samples are in the same domain in terms of the feature space and marginal probability distribution. However, due to unlimited variations of the dominant conditions (illumination, facial appearance, camera quality, and so on) in face acquisition, such single domain methods lack generalization capability, which further prevents them from being applied in practical applications. In light of this, we introduce an unsupervised domain adaptation face anti-spoofing scheme to address the real-world scenario that learns the classifier for the target domain based on training samples in a different source domain. In particular, an embedding function is first imposed based on source and target domain data, which maps the data to a new space where the distribution similarity can be measured. Subsequently, the Maximum Mean Discrepancy between the latent features in source and target domains is minimized such that a more generalized classifier can be learned. State-of-the-art representations including both hand-crafted and deep neural network learned features are further adopted into the framework to quest the capability of them in domain adaptation. Moreover, we introduce a new database for face spoofing detection, which contains more than 4000 face samples with a large variety of spoofing types, capture devices, illuminations, and so on. Extensive experiments on existing benchmark databases and the new database verify that the proposed approach can gain significantly better generalization capability in cross-domain scenarios by providing consistently better anti-spoofing performance.",
"title": ""
},
{
"docid": "3ed0e387f8e6a8246b493afbb07a9312",
"text": "Van den Ende-Gupta Syndrome (VDEGS) is an autosomal recessive disorder characterized by blepharophimosis, distinctive nose, hypoplastic maxilla, and skeletal abnormalities. Using homozygosity mapping in four VDEGS patients from three consanguineous families, Anastacio et al. [Anastacio et al. (2010); Am J Hum Genet 87:553-559] identified homozygous mutations in SCARF2, located at 22q11.2. Bedeschi et al. [2010] described a VDEGS patient with sclerocornea and cataracts with compound heterozygosity for the common 22q11.2 microdeletion and a hemizygous SCARF2 mutation. Because sclerocornea had been described in DiGeorge-velo-cardio-facial syndrome but not in VDEGS, they suggested that the ocular abnormalities were caused by the 22q11.2 microdeletion. We report on a 23-year-old male who presented with bilateral sclerocornea and the VDGEGS phenotype who was subsequently found to be homozygous for a 17 bp deletion in exon 4 of SCARF2. The occurrence of bilateral sclerocornea in our patient together with that of Bedeschi et al., suggests that the full VDEGS phenotype may include sclerocornea resulting from homozygosity or compound heterozygosity for loss of function variants in SCARF2.",
"title": ""
},
{
"docid": "8e077186aef0e7a4232eec0d8c73a5a2",
"text": "The appetite for up-to-date information about earth’s surface is ever increasing, as such information provides a base for a large number of applications, including local, regional and global resources monitoring, land-cover and land-use change monitoring, and environmental studies. The data from remote sensing satellites provide opportunities to acquire information about land at varying resolutions and has been widely used for change detection studies. A large number of change detection methodologies and techniques, utilizing remotely sensed data, have been developed, and newer techniques are still emerging. This paper begins with a discussion of the traditionally pixel-based and (mostly) statistics-oriented change detection techniques which focus mainly on the spectral values and mostly ignore the spatial context. This is succeeded by a review of object-based change detection techniques. Finally there is a brief discussion of spatial data mining techniques in image processing and change detection from remote sensing data. The merits and issues of different techniques are compared. The importance of the exponential increase in the image data volume and multiple sensors and associated challenges on the development of change detection techniques are highlighted. With the wide use of very-high-resolution (VHR) remotely sensed images, object-based methods and data mining techniques may have more potential in change detection. 2013 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS) Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "3595188804ba47f745c7b6b8f17c45c0",
"text": "This paper presents a novel electrocardiogram (ECG) processing technique for joint data compression and QRS detection in a wireless wearable sensor. The proposed algorithm is aimed at lowering the average complexity per task by sharing the computational load among multiple essential signal-processing tasks needed for wearable devices. The compression algorithm, which is based on an adaptive linear data prediction scheme, achieves a lossless bit compression ratio of 2.286x. The QRS detection algorithm achieves a sensitivity (Se) of 99.64% and positive prediction (+P) of 99.81% when tested with the MIT/BIH Arrhythmia database. Lower overall complexity and good performance renders the proposed technique suitable for wearable/ambulatory ECG devices.",
"title": ""
},
{
"docid": "5acad83ce99c6403ef20bfa62672eafd",
"text": "A large class of sequential decision-making problems under uncertainty can be modeled as Markov and Semi-Markov Decision Problems, when their underlying probability structure has a Markov chain. They may be solved by using classical dynamic programming methods. However, dynamic programming methods suffer from the curse of dimensionality and break down rapidly in face of large state spaces. In addition, dynamic programming methods require the exact computation of the so-called transition probabilities, which are often hard to obtain and are hence said to suffer from the curse of modeling as well. In recent years, a simulation-based method, called reinforcement learning, has emerged in the literature. It can, to a great extent, alleviate stochastic dynamic programming of its curses by generating near-optimal solutions to problems having large state-spaces and complex transition mechanisms. In this paper, a simulation-based algorithm that solves Markov and Semi-Markov decision problems is presented, along with its convergence analysis. The algorithm involves a step-size based transformation on two time scales. Its convergence analysis is based on a recent result on asynchronous convergence of iterates on two time scales. We present numerical results from the new algorithm on a classical preventive maintenance case study of a reasonable size, where results on the optimal policy are also available. In addition, we present a tutorial that explains the framework of reinforcement learning in the context of semi-Markov decision problems for long-run average cost.",
"title": ""
},
{
"docid": "56c66b0c2698d63d9ef5f690688ee36d",
"text": "This article presents the author's personal reflection on how her nursing practice was enhanced as a result of losing her voice. Surprisingly, being unable to speak appeared to improve the nurse/patient relationship. Patients responded positively to a quiet approach and silent communication. Indeed, the skilled use of non-verbal communication through silence, facial expression, touch and closer physical proximity appeared to facilitate active listening, and helped to develop empathy, intuition and presence between the nurse and patient. Quietly 'being with' patients and communicating non-verbally was an effective form of communication. It is suggested that effective communication is dependent on the nurse's ability to listen and utilize non-verbal communication skills. In addition, it is clear that reflection on practical experience can be an important method of uncovering and exploring tacit knowledge in nursing.",
"title": ""
},
{
"docid": "5c0f2bcde310b7b76ed2ca282fde9276",
"text": "With the increasing prevalence of Alzheimer's disease, research focuses on the early computer-aided diagnosis of dementia with the goal to understand the disease process, determine risk and preserving factors, and explore preventive therapies. By now, large amounts of data from multi-site studies have been made available for developing, training, and evaluating automated classifiers. Yet, their translation to the clinic remains challenging, in part due to their limited generalizability across different datasets. In this work, we describe a compact classification approach that mitigates overfitting by regularizing the multinomial regression with the mixed ℓ1/ℓ2 norm. We combine volume, thickness, and anatomical shape features from MRI scans to characterize neuroanatomy for the three-class classification of Alzheimer's disease, mild cognitive impairment and healthy controls. We demonstrate high classification accuracy via independent evaluation within the scope of the CADDementia challenge. We, furthermore, demonstrate that variations between source and target datasets can substantially influence classification accuracy. The main contribution of this work addresses this problem by proposing an approach for supervised domain adaptation based on instance weighting. Integration of this method into our classifier allows us to assess different strategies for domain adaptation. Our results demonstrate (i) that training on only the target training set yields better results than the naïve combination (union) of source and target training sets, and (ii) that domain adaptation with instance weighting yields the best classification results, especially if only a small training component of the target dataset is available. These insights imply that successful deployment of systems for computer-aided diagnostics to the clinic depends not only on accurate classifiers that avoid overfitting, but also on a dedicated domain adaptation strategy.",
"title": ""
},
{
"docid": "b46a9871dc64327f1ab79fa22de084ce",
"text": "Traditional address scanning attacks mainly rely on the naive 'brute forcing' approach, where the entire IPv4 address space is exhaustively searched by enumerating different possibilities. However, such an approach is inefficient for IPv6 due to its vast subnet size (i.e., 2^64). As a result, it is widely assumed that address scanning attacks are less feasible in IPv6 networks. In this paper, we evaluate new IPv6 reconnaissance techniques in real IPv6 networks and expose how to leverage the Domain Name System (DNS) for IPv6 network reconnaissance. We collected IPv6 addresses from 5 regions and 100,000 domains by exploiting DNS reverse zone and DNSSEC records. We propose a DNS Guard (DNSG) to efficiently detect DNS reconnaissance attacks in IPv6 networks. DNSG is a plug and play component that could be added to the existing infrastructure. We implement DNSG using Bro and Suricata. Our results demonstrate that DNSG could effectively block DNS reconnaissance attacks.",
"title": ""
},
{
"docid": "48f7388fdf91a85cfeeee0d35e19c889",
"text": "Public key infrastructures (PKIs) are of crucial importance for the life of online services relying on certificate-based authentication, like e-commerce, e-government, online banking, as well as e-mail, social networking, cloud services and many others. One of the main points of failure (POFs) of modern PKIs concerns reliability and security of certificate revocation lists (CRLs), that must be available and authentic any time a certificate is used. Classically, the CRL for a set of certificates is maintained by the same (and sole) certification authority (CA) that issued the certificates, and this introduces a single POF in the system. We address this issue by proposing a solution in which multiple CAs share a public, decentralized and robust ledger where CRLs are collected. For this purpose, we consider the model of public ledgers based on blockchains, introduced for the use in cryptocurrencies, that is becoming a widespread solution for many online applications with stringent security and reliability requirements.",
"title": ""
},
{
"docid": "1672b30a74bf5d1111b1f0892b4018bc",
"text": "From the Divisions of Rheumatology, Allergy, and Immunology (M.R.M.) and Cardiology (D.M.D.); and the Departments of Radiology (J.Y.S.) and Pathology (R.P.H.), Massachusetts General Hospital; the Division of Rheumatology, Allergy, and Immunology, Brigham and Women’s Hospital (M.C.C.); and the Departments of Medicine (M.R.M., M.C.C., D.M.D.), Radiology (J.Y.S.), and Pathology (R.P.H.), Harvard Medical School — all in Boston.",
"title": ""
},
{
"docid": "b38f1dbd7b13c8b0ffd3277c5b62ba7f",
"text": "It is very difficult to find feasible QoS (Quality of service) routes in the mobile ad hoc networks (MANETs), because of the nature constrains of it, such as dynamic network topology, wireless communication link and limited process capability of nodes. In order to reduce average cost in flooding path discovery scheme of the traditional MANETs routing protocols and increase the probability of success in finding QoS feasible paths and It proposed a heuristic and distributed route discovery new method supports QoS requirement for MANETs in this study. This method integrates a distributed route discovery scheme with a Reinforcement Learning (RL) method that only utilizes the local information for the dynamic network environment; and the route expand scheme based on Cluster based Routing Algorithms (CRA) method to find more new feasible paths and avoid the problem of optimize timing in previous smart net Quality of service in MANET. In this paper proposed method Compared with traditional method, the experiment results shoItd the network performance is improved optimize timing, efficient and effective.",
"title": ""
},
{
"docid": "0701f4d74179857b736ebe2c7cdb78b7",
"text": "Modern computer networks generate significant volume of behavioural system logs on a daily basis. Such networks comprise many computers with Internet connectivity, and many users who access the Web and utilise Cloud services make use of numerous devices connected to the network on an ad-hoc basis. Measuring the risk of cyber attacks and identifying the most recent modus-operandi of cyber criminals on large computer networks can be difficult due to the wide range of services and applications running within the network, the multiple vulnerabilities associated with each application, the severity associated with each vulnerability, and the ever-changing attack vector of cyber criminals. In this paper we propose a framework to represent these features, enabling real-time network enumeration and traffic analysis to be carried out, in order to produce quantified measures of risk at specific points in time. We validate the approach using data from a University network, with a data collection consisting of 462,787 instances representing threats measured over a 144 hour period. Our analysis can be generalised to a variety of other contexts. © 2016 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).",
"title": ""
},
{
"docid": "f3fb98614d1d8ff31ca977cbf6a15a9c",
"text": "Paraphrase Identification and Semantic Similarity are two different yet well related tasks in NLP. There are many studies on these two tasks extensively on structured texts in the past. However, with the strong rise of social media data, studying these tasks on unstructured texts, particularly, social texts in Twitter is very interesting as it could be more complicated problems to deal with. We investigate and find a set of simple features which enables us to achieve very competitive performance on both tasks in Twitter data. Interestingly, we also confirm the significance of using word alignment techniques from evaluation metrics in machine translation in the overall performance of these tasks.",
"title": ""
},
{
"docid": "6a345950bb08717f52aeb87c859f72f2",
"text": "This paper presents Anonymouth, a novel framework for anonymizing writing style. Without accounting for style, anonymous authors risk identification. This framework is necessary to provide a tool for testing the consistency of anonymized writing style and a mechanism for adaptive attacks against stylometry techniques. Our framework defines the steps necessary to anonymize documents and implements them. A key contribution of this work is this framework, including novel methods for identifying which features of documents need to change and how they must be changed to accomplish document anonymization. In our experiment, 80% of the user study participants were able to anonymize their documents in terms of a fixed corpus and limited feature set used. However, modifying pre-written documents were found to be difficult and the anonymization did not hold up to more extensive feature sets. It is important to note that Anonymouth is only the first step toward a tool to acheive stylometric anonymity with respect to state-of-the-art authorship attribution techniques. The topic needs further exploration in order to accomplish significant anonymity.",
"title": ""
},
{
"docid": "58fb566facf511f6295126eebab521d7",
"text": "UNLABELLED\n Traditional wound tracing technique consists of tracing the perimeter of the wound on clear acetate with a fine-tip marker, then placing the tracing on graph paper and counting the grids to calculate the surface area. Standard wound measurement technique for calcu- lating wound surface area (wound tracing) was compared to a new wound measurement method using digital photo-planimetry software ([DPPS], PictZar® Digital Planimetry).\n\n\nMETHODS\nTwo hundred wounds of varying etiologies were measured and traced by experienced exam- iners (raters). Simultaneously, digital photographs were also taken of each wound. The digital photographs were downloaded onto a PC, and using DPPS software, the wounds were measured and traced by the same examiners. Accuracy, intra- and interrater reliability of wound measurements obtained from tracings and from DPPS were studied and compared. Both accuracy and rater variability were directly related to wound size when wounds were measured and traced in the tradi- tional manner.\n\n\nRESULTS\nIn small (< 4 cm2), regularly shaped (round or oval) wounds, both accuracy and rater reliability was 98% and 95%, respectively. However, in larger, irregularly shaped wounds or wounds with epithelial islands, DPPS was more accurate than traditional mea- suring (3.9% vs. 16.2% [average error]). The mean inter-rater reliabil- ity score was 94% for DPPS and 84% for traditional measuring. The mean intrarater reliability score was 98.3% for DPPS and 89.3% for traditional measuring. In contrast to traditional measurements, DPPS may provide a more objective assessment since it can be done by a technician who is blinded to the treatment plan. Planimetry of digital photographs allows for a closer examination (zoom) of the wound and better visibility of advancing epithelium.\n\n\nCONCLUSION\nMeasurements of wounds performed on digital photographs using planimetry software were simple and convenient. It was more accurate, more objective, and resulted in better correlation within and between examiners. .",
"title": ""
},
{
"docid": "78e4395a6bd6b4424813e20633d140b8",
"text": "This paper introduces a high-speed CMOS comparator. The comparator consists of a differential input stage, two regenerative flip-flops, and an S-R latch. No offset cancellation is exploited, which reduces the power consumption as well as the die area and increases the comparison speed. An experimental version of the comparator has been integrated in a standard double-poly double-metal 1.5-pm n-well process with a die area of only 140 x 100 pmz. This circuit, operating under a +2.5/– 2.5-V power supply, performs comparison to a precision of 8 b with a symmetrical input dynamic range of 2.5 V (therefore ~0.5 LSB resolution is equal to ~ 4.9 mV). input stage flip-flops S-R Iat",
"title": ""
}
] |
scidocsrr
|